NGC | Catalog

PIConGPU

Description
PIConGPU is a plasma physics application to solve the dynamics of a plasma by computing the motion of electrons and ions in the plasma field.
Publisher
HZDR
Latest Tag
july2018patch
Modified
April 1, 2024
Compressed Size
464.97 MB
Multinode Support
Yes
Multi-Arch Support
No

PIConGPU

PIConGPU is a fully relativistic, many GPGPU, 3D3V particle-in-cell (PIC) code. The PIC algorithm is a central tool in plasma physics. It describes the dynamics of a plasma by computing the motion of electrons and ions in the plasma based on Maxwell’s equations.

Typical applications that can be simulated with PIConGPU include the interaction of high-power lasers with matter, astrophysical plasmas, or the propagation of electromagnetic waves. More info on PIConGPU can be found here.

See here for a document describing prerequisites and setup steps for all HPC containers, and for nvidia-docker instructions on pulling NGC containers.

System requirements

Before running the NGC PIConGPU container, please ensure your system meets the following requirements.

Running PIConGPU

Supported Architectures

NGC provides access to PIConGPU containers targeting the following NVIDIA GPU architectures.

  • Pascal (sm60)
  • Volta (sm70)

Executables
  • tbg: abstracts program runtime options from the technical details of supercomputers
  • pic-create: a short-hand to create a new set of input files from an example
  • pic-build: a short-hand for an out-of-source build with CMake
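Taken together, these tools support a create, build, run cycle. Based on the interactive workflow shown later on this page (paths are illustrative):

```shell
# Clone an example set of input files, build it, and run it with tbg
pic-create $PIC_EXAMPLES/LaserWakefield $HOME/picInputs/myLWFA
cd $HOME/picInputs/myLWFA
pic-build
tbg -s bash -c etc/picongpu/1.cfg -t etc/picongpu/bash/mpirun.tpl $HOME/runs/lwfa_001
```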
Command invocation

Example command

tbg -s bash -c <config.cfg> -t <template.tpl> <destination-dir>
Examples

The following examples demonstrate how to run the NGC PIConGPU container under supported container runtimes.

For further information about the PIConGPU workflow, see here.

Running with nvidia-docker

Command line execution with nvidia-docker

The Docker image comes with a pre-compiled simulation for the laser-wakefield accelerator (LWFA), which we will use for our example.

You can use the following command to start the example script for the LWFA simulation. Note that -p 2459:2459 allows you to connect to the ISAAC server in the container at port 2459.

# note: for 1 GPU,  change to: lwfa
# note: for 4 GPUs, change to: lwfa4
# note: for 8 GPUs, change to: lwfa8
nvidia-docker run --shm-size=1g --ulimit memlock=-1 -p 2459:2459 -it --rm nvcr.io/hpc/picongpu:july2018patch lwfa4

Interactive shell with nvidia-docker

Follow the instructions below to run a simulation with your own input data:

nvidia-docker run --shm-size=1g --ulimit memlock=-1 -it nvcr.io/hpc/picongpu:july2018patch

This opens a pre-configured PIConGPU workspace ready for your modifications. The following basic workflow steps clone one of the available examples for modification, compile and run it:

  1. Clone the LWFA example to $HOME/picInputs/myLWFA
pic-create $PIC_EXAMPLES/LaserWakefield $HOME/picInputs/myLWFA

Now switch to your input directory:

cd $HOME/picInputs/myLWFA

You can edit input files that require a re-compile with the pic-edit tool, e.g.: pic-edit density laser (see pic-edit --help for more information). Runtime options reside in the folder etc/picongpu/*.cfg, e.g.: editor etc/picongpu/1.cfg

  2. To build the tuned binary, use:
pic-build
  3. Now run the simulation interactively:
tbg -s bash -c etc/picongpu/1.cfg -t etc/picongpu/bash/mpirun.tpl $HOME/runs/lwfa_001

Note: if you want to extract the data from your container to the host-system, mount your runs/ directory with -v on startup (docker run). See here.
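For example, a host directory can be bind-mounted over the container's run location so results persist after the container exits (the container-side path /root/runs below is an assumption; adjust it to wherever your runs are written):

```shell
# Host directory for simulation output (created if missing)
mkdir -p $HOME/runs

# Bind-mount it into the container; /root/runs is an assumed target path
nvidia-docker run --shm-size=1g --ulimit memlock=-1 -it --rm \
    -v $HOME/runs:/root/runs \
    nvcr.io/hpc/picongpu:july2018patch
```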

Running with Singularity

Save the NGC PIConGPU container as a local Singularity image file:

$ singularity build picongpu_july2018patch.simg docker://nvcr.io/hpc/picongpu:july2018patch

This will save the container to the current working directory as picongpu_july2018patch.simg

Once the local Singularity image has been pulled the following modes of running are supported.

Note: Singularity/2.x

When using Singularity/2.x NGC credentials must be supplied before running the build command above. More information describing how to obtain and use your NVIDIA NGC Cloud Services API key can be found here.

To set your NGC container registry authentication credentials:

export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
export SINGULARITY_DOCKER_PASSWORD=

Note: Singularity 3.1.x - 3.2.x

There is currently a bug in Singularity 3.1.x and 3.2.x causing the LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, the LD_LIBRARY_PATH must be unset before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...
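Combined with the LWFA example below, a full invocation under Singularity 3.1.x/3.2.x would look like:

```shell
# Unset LD_LIBRARY_PATH for this command only, to work around the bug
LD_LIBRARY_PATH="" singularity exec --nv picongpu_july2018patch.simg lwfa4
```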

Command line execution with singularity

The Singularity image comes with a pre-compiled simulation for the laser-wakefield accelerator (LWFA), which we will use for our example.

You can use the following command to start the LWFA simulation.

# note: for 1 GPU,  change to: lwfa
# note: for 4 GPUs, change to: lwfa4
# note: for 8 GPUs, change to: lwfa8
singularity exec --nv picongpu_july2018patch.simg lwfa4

Interactive shell with singularity

Follow the instructions below to run a simulation with your own input data:

singularity run --nv picongpu_july2018patch.simg

This opens a pre-configured PIConGPU workspace ready for your modifications. The following basic workflow steps clone one of the available examples for modification, compile and run it:

  1. Clone the LWFA example to $HOME/picInputs/myLWFA
pic-create $PIC_EXAMPLES/LaserWakefield $HOME/picInputs/myLWFA

Now switch to your input directory:

cd $HOME/picInputs/myLWFA

You can edit input files that require a re-compile with the pic-edit tool, e.g.: pic-edit density laser (see pic-edit --help for more information). Runtime options reside in the folder etc/picongpu/*.cfg, e.g.: editor etc/picongpu/1.cfg

  2. To build the tuned binary, use:
pic-build
  3. Now run the simulation interactively:
tbg -s bash -c etc/picongpu/1.cfg -t etc/picongpu/bash/mpirun.tpl $HOME/runs/lwfa_001

Note: if you want to extract the data from your container to the host-system, mount your runs/ directory with -B on startup (singularity run).
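For example (the container-side path /root/runs is an assumption; replace it with the directory your runs are written to):

```shell
# Host directory for simulation output (created if missing)
mkdir -p $HOME/runs

# Bind-mount the host directory into the container at an assumed path
singularity run --nv -B $HOME/runs:/root/runs picongpu_july2018patch.simg
```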

Suggested Reading

PIConGPU Documentation

Latest PIConGPU Release Notes

Submit technical support and feature requests here

PIConGPU is open scientific software; please cite it. More citation information can be found here.