GROMACS is a molecular dynamics application that simulates the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules such as proteins, lipids, and nucleic acids, which have many complicated bonded interactions.
Before running the NGC GROMACS container, please ensure your system meets the following requirements.
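The exact requirements depend on the container release, but as a quick sanity check something like the following confirms that an NVIDIA GPU driver and Docker are available (a sketch; the examples below additionally assume nvidia-docker or another NVIDIA container runtime):
# Verify the NVIDIA driver can see at least one GPU
nvidia-smi
# Verify Docker is installed; the examples below invoke it through nvidia-docker
docker --version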
The following examples demonstrate using the NGC GROMACS container to run the water_GMX50_bare benchmark.
Throughout this example the container version will be referenced as $GROMACS_TAG; replace this with the tag you wish to run.
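For example, to use the 2022.3 release referenced in the BCP section below (the tag here is illustrative; substitute whichever release you want to run):
GROMACS_TAG=2022.3   # example only; use the tag you wish to run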
Download the water_GMX50_bare benchmark:
DATA_SET=water_GMX50_bare
wget -c https://ftp.gromacs.org/pub/benchmarks/${DATA_SET}.tar.gz
tar xf ${DATA_SET}.tar.gz
cd ./water-cut1.0_GMX50_bare/1536
DOCKER="nvidia-docker run -it --rm -v ${PWD}:/host_pwd --workdir /host_pwd nvcr.io/hpc/gromacs:${GROMACS_TAG}"
DOCKER="nvidia-docker run -it --rm -v ${PWD}:/host_pwd --workdir /host_pwd --device=/dev/infiniband --cap-add=IPC_LOCK --net=host nvcr.io/hpc/gromacs:${GROMACS_TAG}"
Prepare the benchmark data.
${DOCKER} gmx grompp -f pme.mdp
Run GROMACS.
${DOCKER} gmx mdrun -ntmpi 4 -nb gpu -pin on -v -noconfout -nsteps 5000 -ntomp 10 -s topol.tpr
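The same benchmark can also be run with Singularity. The run step below references a ${SINGULARITY} variable; a minimal definition mirroring the Docker setup above would be:
# Save the Singularity command line to a variable for reuse
SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/gromacs:${GROMACS_TAG}"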
Prepare the benchmark data.
singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/gromacs:${GROMACS_TAG} gmx grompp -f pme.mdp
Run GROMACS.
${SINGULARITY} gmx mdrun -ntmpi 4 -nb gpu -pin on -v -noconfout -nsteps 5000 -ntomp 10 -s topol.tpr
There is currently an issue in Singularity versions below v3.5 that causes LD_LIBRARY_PATH to be set incorrectly within the container environment. As a workaround, clear LD_LIBRARY_PATH before invoking Singularity:
LD_LIBRARY_PATH="" singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/gromacs:${GROMACS_TAG} gmx grompp -f pme.mdp
NVIDIA Base Command Platform (BCP) offers a ready-to-use cloud-hosted solution that manages the end-to-end lifecycle of development, workflows, and resource management. Before running the commands below, install and configure the NGC CLI; see the NGC CLI documentation for more information.
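Configuration is typically done interactively with the NGC CLI's config command (assuming the CLI is already installed; the prompts ask for your API key, org, team, and ACE):
# Interactively store your NGC API key, org, team, and ACE
ngc config set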
Upload the stmv dataset using the command below:
ngc dataset upload --source ./stmv/ --desc "GROMACS stmv dataset" gromacs_dataset
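The <dataset-id> required by the batch run command below can be looked up once the upload completes (assuming your NGC CLI version provides the dataset list subcommand):
# List your datasets to find the ID of the uploaded GROMACS stmv dataset
ngc dataset list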
Note that the -g <md-log-path> and -e <energy-log-path> options must be added to the run command: the mounted working directory is read-only, so the output logs must be written to a writable mounted directory (here, /results).
Single-node run of the stmv dataset on 4 GPUs, with 2 thread-MPI ranks per GPU and 15 OpenMP threads per thread-MPI rank, for a total of 120 CPU cores.
ngc batch run --name "gromacs_reducentomp120cores" --priority NORMAL --order 50 --preempt RUNONCE --min-timeslice 0s --total-runtime 0s --ace <your-ace> --instance dgxa100.80g.4.norm --commandline "/usr/bin/nventry -build_base_dir=/usr/local/gromacs -build_default=avx2_256 gmx mdrun -g /results/md.log -e /results/ener.edr -ntmpi 8 -ntomp 15 -nb gpu -pme gpu -npme 1 -update gpu -bonded gpu -nsteps 100000 -resetstep 90000 -noconfout -dlb no -nstlist 300 -pin on -v -gpu_id 0123" --result /results/ --image "hpc/gromacs:2022.3" --org <your-org> --datasetid <dataset-id>:/host_pwd/
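Once submitted, the job can be monitored and its results retrieved with the NGC CLI (a sketch; <job-id> is the ID reported by the ngc batch run command above):
# Check the status of the submitted job
ngc batch info <job-id>
# Download the /results output (md.log, ener.edr) when the job completes
ngc result download <job-id>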