HOOMD-blue is a highly flexible and scalable particle simulation toolkit. It makes use of high-level Python scripts to set initial conditions, control simulation parameters, and extract data for in situ analysis.
More information about HOOMD-blue is available on the HOOMD-blue webpage.
Please cite HOOMD-blue if it is used in any published work.
Before running the NGC HOOMD-blue container, please ensure that your system meets the following requirements.
NOTE: A HOOMD-blue executable optimized for your system hardware will be chosen automatically at runtime.
Typical HOOMD-blue invocation involves executing a HOOMD-blue script with Python.
$ python3 script.py [options]
Where:
python3: Python interpreter executable
script.py: script containing instructions for HOOMD-blue execution
[options]: command-line options for HOOMD-blue. An exhaustive list of options is available on the HOOMD-blue documentation website.
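For example, to force GPU execution on device 0 (options as documented for HOOMD-blue 2.x; the script name is illustrative):
$ python3 script.py --mode=gpu --gpu=0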
HOOMD-blue relies on Python scripts for instructions to run. The HOOMD-blue developers maintain a collection of example scripts for benchmarking, available via git:
$ git clone https://github.com/joaander/hoomd-benchmarks.git
This command downloads several sub-folders containing benchmarking scripts. The scripts can be bind-mounted and executed by the HOOMD-blue container. For example, to run the microspheres benchmark:
Docker:
$ cd hoomd-benchmarks/microsphere
$ nvidia-docker run -ti --rm --privileged -v $(pwd):/host_pwd nvcr.io/hpc/hoomd-blue:v2.6.0 python3 /host_pwd/bmark.py
Singularity:
$ cd hoomd-benchmarks/microsphere
$ singularity build hoomd-blue_v2.6.0.simg docker://nvcr.io/hpc/hoomd-blue:v2.6.0
$ singularity run --nv hoomd-blue_v2.6.0.simg python3 bmark.py
More detailed instructions on using the NGC HOOMD-blue container in Docker and Singularity can be found below.
Save the NGC HOOMD-blue container as a local Singularity image file:
$ singularity build hoomd-blue_v2.6.0.simg docker://nvcr.io/hpc/hoomd-blue:v2.6.0
This command saves the container in the current directory as hoomd-blue_v2.6.0.simg
In order to pull NGC images with Singularity version 2.x and earlier, NGC container registry authentication credentials are required.
To set your NGC container registry authentication credentials:
$ export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
$ export SINGULARITY_DOCKER_PASSWORD=
More information describing how to obtain and use your NVIDIA NGC Cloud Services API key can be found here.
There is currently a bug in Singularity 3.1.x and 3.2.x causing LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:
$ LD_LIBRARY_PATH="" singularity exec ...
Once the local Singularity image has been pulled, the following modes of running are supported:
To simplify the examples below, define the following command aliases. These may be set as environment variables in a shell or batch script.
SINGULARITY will be used to launch processes within the NGC HOOMD-blue container using the Singularity runtime:
$ export SINGULARITY="$(which singularity) run --nv -B $(pwd):/host_pwd hoomd-blue_v2.6.0.simg"
Where:
run: specifies mode of execution
--nv: exposes the host GPU to the container
-B $(pwd):/host_pwd: bind mounts the current working directory in the container at /host_pwd
hoomd-blue_v2.6.0.simg: path of the saved Singularity image file
This mode of running is suitable for interactive execution from a local workstation containing one or more GPUs. There are no requirements other than those stated in the System Requirements section.
To launch one HOOMD-blue process per GPU, use:
$ ${SINGULARITY} mpirun -mca pml ^ucx -mca btl smcuda,self --bind-to core -n <#GPUs> python3 script.py
Where:
-mca pml ^ucx -mca btl smcuda,self: MPI parameters to disable UCX and set the byte-transfer layer (may significantly increase single-node performance)
--bind-to core: distributes MPI ranks evenly among CPU cores (ensures GPU affinity)
SINGULARITY: Singularity alias defined above
script.py: path of a HOOMD-blue Python script
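For example, on a workstation with 4 GPUs (a hypothetical count), the microsphere benchmark can be launched as follows, assuming the SINGULARITY alias was exported from within the hoomd-benchmarks/microsphere directory so that bmark.py is visible under /host_pwd:
$ ${SINGULARITY} mpirun -mca pml ^ucx -mca btl smcuda,self --bind-to core -n 4 python3 /host_pwd/bmark.py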
To invoke an interactive shell, run /bin/bash within the container:
$ ${SINGULARITY} /bin/bash
While Singularity provides an interactive shell via singularity shell, this invocation ignores container entrypoint scripts. Thus, the preferred method to access an interactive shell is via a singularity run command directed at /bin/bash.
To run a HOOMD-blue Python script while using the interactive shell:
$ mpirun -mca pml ^ucx -mca btl smcuda,self --bind-to core -n <#GPUs> python3 /host_pwd/script.py
Where:
-mca pml ^ucx -mca btl smcuda,self: MPI parameters to disable UCX and set the byte-transfer layer (may significantly increase single-node performance)
--bind-to core: distributes MPI ranks evenly among CPU cores (ensures GPU affinity)
/host_pwd/script.py: path of a HOOMD-blue Python script
Clusters with a local compatible OpenMPI installation may launch the NGC HOOMD-blue container using the host provided mpirun or mpiexec launcher.
To use the cluster-provided mpirun command to launch the NGC HOOMD-blue container, OpenMPI 3.0.2 or newer is required.
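The host OpenMPI version can be verified before launching, for example:
$ mpirun --version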
Running with mpirun maintains tight integration with the resource manager.
Launch HOOMD-blue within the container, using mpirun:
$ mpirun -n <#GPUs> --bind-to core ${SINGULARITY} python3 /host_pwd/script.py
Where:
--bind-to core: distributes MPI ranks evenly among CPU cores (ensures GPU affinity)
SINGULARITY: Singularity alias defined above
/host_pwd/script.py: path of a HOOMD-blue Python script
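As an illustration only, a SLURM batch script using the host-provided mpirun might look like the following. The node and GPU counts are hypothetical, and the OpenMPI module name is site-specific:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4

# Site-specific: make a host OpenMPI >= 3.0.2 available on the PATH
module load openmpi

# Singularity alias from above; bind-mounts the submission directory at /host_pwd
export SINGULARITY="$(which singularity) run --nv -B $(pwd):/host_pwd hoomd-blue_v2.6.0.simg"

# One MPI rank per GPU across the allocation
mpirun -n ${SLURM_NTASKS} --bind-to core ${SINGULARITY} python3 /host_pwd/script.py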
The NGC HOOMD-blue container allows the user to launch parallel MPI jobs from fully within the container. This mode has the least host requirements, but does necessitate additional setup steps, as described below.
The internal container OpenMPI installation requires an OpenMPI hostfile to specify the addresses of all nodes in the cluster. The OpenMPI hostfile takes the following form:
...
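As a concrete illustration (hypothetical hostnames), the hostfile is simply one resolvable node name per line; an optional slot count (e.g. slots=4) may follow each name:
node001
node002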
Generation of this nodelist file via bash script will vary from cluster to cluster. Common examples include:
HOSTFILE=".hostfile.${SLURM_JOB_ID}"
for host in $(scontrol show hostnames); do
echo "${host}" >> ${HOSTFILE}
done
PBS:
HOSTFILE=$(pwd)/.hostfile.${PBS_JOBID}
for host in $(uniq ${PBS_NODEFILE}); do
echo "${host}" >> ${HOSTFILE}
done
Additionally, mpirun must be configured to start the OpenMPI orted process within the container runtime. Set the following environment variables so that mpirun starts orted within the container:
$ export SIMG=hoomd-blue_v2.6.0.simg
$ export OMPI_MCA_plm=rsh
$ export OMPI_MCA_plm_rsh_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR'
$ export OMPI_MCA_orte_launch_agent="${SINGULARITY} /usr/bin/orted"
To launch HOOMD-blue using mpirun:
$ ${SINGULARITY} --check_gpu=false mpirun --hostfile=<hostfile> --np=<#GPUs> --bind-to core python3 /host_pwd/script.py
Where:
SINGULARITY: Singularity alias defined above
--check_gpu=false: skip check for a GPU on the launch node
--bind-to core: distributes MPI ranks evenly among CPU cores (ensures GPU affinity)
/host_pwd/script.py: path of a HOOMD-blue Python script
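Putting the pieces together, an illustrative SLURM batch script for this fully containerized launch mode might look like the following (node and GPU counts are hypothetical):
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4

export SIMG=hoomd-blue_v2.6.0.simg
export SINGULARITY="$(which singularity) run --nv -B $(pwd):/host_pwd ${SIMG}"

# Build the OpenMPI hostfile from the SLURM allocation
HOSTFILE=".hostfile.${SLURM_JOB_ID}"
for host in $(scontrol show hostnames); do
    echo "${host}" >> "${HOSTFILE}"
done

# Start the OpenMPI orted process within the container runtime
export OMPI_MCA_plm=rsh
export OMPI_MCA_plm_rsh_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR'
export OMPI_MCA_orte_launch_agent="${SINGULARITY} /usr/bin/orted"

# Launch one MPI rank per GPU across the allocation
${SINGULARITY} --check_gpu=false mpirun --hostfile=${HOSTFILE} --np=${SLURM_NTASKS} --bind-to core python3 /host_pwd/script.py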
NGC supports the Docker runtime through the nvidia-docker plugin.
To simplify the examples below, define the following command aliases. These may be set as environment variables in a shell or batch script.
DOCKER will be used to launch processes within the NGC HOOMD-blue container using the nvidia-docker runtime:
$ export DOCKER="nvidia-docker run --device=/dev/infiniband --cap-add=IPC_LOCK --privileged -it --rm -v $(pwd):/host_pwd nvcr.io/hpc/hoomd-blue:v2.6.0"
Where:
DOCKER: alias used to store the base Docker command
run: specifies the mode of execution
--device=/dev/infiniband --cap-add=IPC_LOCK: grants container access to host InfiniBand device(s)
--privileged: grants container access to host resources
-it: allocates a pseudo-TTY for interactive use
--rm: makes the container ephemeral (remove the container on exit)
-v $(pwd):/host_pwd: bind mounts the current working directory in the container as /host_pwd
nvcr.io/hpc/hoomd-blue:v2.6.0: URI to the NGC HOOMD-blue image
This mode of running is suitable for interactive execution from a local workstation containing one or more GPUs. There are no requirements other than those stated in the System Requirements section.
To launch one HOOMD-blue process per GPU, use:
$ ${DOCKER} mpirun --allow-run-as-root -mca pml ^ucx -mca btl smcuda,self -n <#GPUs> python3 /host_pwd/script.py
Where:
-mca pml ^ucx -mca btl smcuda,self: MPI parameters to disable UCX and set the byte-transfer layer (may significantly increase single-node performance)
/host_pwd/script.py: the path of a HOOMD-blue Python script
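For example, with the DOCKER alias exported from within the hoomd-benchmarks/microsphere directory and a workstation with 4 GPUs (a hypothetical count):
$ ${DOCKER} mpirun --allow-run-as-root -mca pml ^ucx -mca btl smcuda,self -n 4 python3 /host_pwd/bmark.py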
To start an interactive shell within the container environment, launch the container using the alias set earlier, DOCKER:
$ ${DOCKER}
To run a HOOMD-blue Python script while using the interactive shell:
$ mpirun --allow-run-as-root -n <#GPUs> python3 /host_pwd/script.py
Where:
/host_pwd/script.py: the path of a HOOMD-blue Python script