GROMACS

Description

GROMACS is a popular molecular dynamics application used to simulate proteins and lipids.

Publisher

KTH Royal Institute of Technology

Latest Tag

2022.3

Modified

June 1, 2023

Compressed Size

422.58 MB

Multinode Support

No

Multi-Arch Support

Yes

GROMACS

GROMACS is a molecular dynamics application designed to simulate Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed to simulate biochemical molecules like proteins, lipids, and nucleic acids that have a lot of complicated bonded interactions.

System requirements

Before running the NGC GROMACS container, please ensure your system meets the following requirements.

  • One of the following container runtimes (both are demonstrated in the Examples below)
    • Docker with nvidia-docker / the NVIDIA Container Toolkit
    • Singularity
  • One of the following NVIDIA GPU(s)
    • Pascal (sm60)
    • Volta (sm70)
    • Ampere (sm80)
    • Hopper (sm90)

x86_64

  • CPU with AVX instruction support
  • One of the following CUDA driver versions
    • r520 (>= 520.61.05)
    • r450 (>= 450.80.02)

arm64

  • Marvell ThunderX2 CPU
  • CUDA driver version >= r460

System Recommendations

  • A high clock rate is more important than the number of cores, although having more than one thread per rank is good.
  • GROMACS has multi-GPU support, but needs several CPU cores for each GPU. When starting out, it is best to run with a single GPU using all CPU cores and then scale up to determine which configuration performs best.
  • Launch multiple ranks per GPU for better GPU utilization. For example, on a 2-socket Broadwell server with 32 total cores and 4 P100s, set ranks per GPU to 3 and threads to 2, as sketched below.
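
A minimal sketch of that last recommendation (the flag values are illustrative assumptions, not measured settings): 4 GPUs with 3 thread-MPI ranks per GPU gives 12 ranks, each with 2 OpenMP threads.

# 4 GPUs x 3 ranks per GPU = 12 thread-MPI ranks; 2 OpenMP threads per rank uses 24 of the 32 cores
gmx mdrun -ntmpi 12 -ntomp 2 -nb gpu -pin on -s topol.tpr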

Examples

The following examples demonstrate how to use the NGC GROMACS container to run the water_GMX50_bare benchmark. Throughout these examples the container version is referenced as ${GROMACS_TAG}; replace this with the tag you wish to run.
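
For example, to use the latest tag listed at the top of this page:

GROMACS_TAG=2022.3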

Download the water_GMX50_bare benchmark:

DATA_SET=water_GMX50_bare
wget -c https://ftp.gromacs.org/pub/benchmarks/${DATA_SET}.tar.gz
tar xf ${DATA_SET}.tar.gz
cd ./water-cut1.0_GMX50_bare/1536

Running with nvidia-docker

Without InfiniBand

DOCKER="nvidia-docker run -it --rm -v ${PWD}:/host_pwd --workdir /host_pwd nvcr.io/hpc/gromacs:${GROMACS_TAG}"

With InfiniBand

DOCKER="nvidia-docker run -it --rm -v ${PWD}:/host_pwd --workdir /host_pwd --device=/dev/infiniband --cap-add=IPC_LOCK --net=host nvcr.io/hpc/gromacs:${GROMACS_TAG}"

Prepare the benchmark data.

${DOCKER} gmx grompp -f pme.mdp

Run GROMACS.

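# -ntmpi 4 starts four thread-MPI ranks and -ntomp 10 gives each rank ten OpenMP threads;
# -nb gpu offloads the non-bonded calculations to the GPU, -pin on pins threads to cores,
# and -noconfout skips writing the final configuration. Adjust the counts to your hardware.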
${DOCKER} gmx mdrun -ntmpi 4 -nb gpu -pin on -v -noconfout -nsteps 5000 -ntomp 10 -s topol.tpr

Running with Singularity

Prepare the benchmark data.

SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/gromacs:${GROMACS_TAG}"

${SINGULARITY} gmx grompp -f pme.mdp

Run GROMACS.

${SINGULARITY} gmx mdrun -ntmpi 4 -nb gpu -pin on -v -noconfout -nsteps 5000 -ntomp 10 -s topol.tpr

Note: Singularity < v3.5

There is currently an issue in Singularity versions below v3.5 that causes LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

LD_LIBRARY_PATH="" singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/gromacs:${GROMACS_TAG} gmx grompp -f pme.mdp

Running on Base Command Platform

NVIDIA Base Command Platform (BCP) offers a ready-to-use, cloud-hosted solution that manages the end-to-end lifecycle of development, workflows, and resource management. Before running the commands below, install and configure the NGC CLI; more information can be found in the BCP User Guide linked under Suggested Reading.
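
As a minimal sketch (assuming the ngc binary is already installed and an NGC API key is available), the CLI can be configured interactively before use:

# Prompts for the API key, output format, org, team, and ACE, and stores them for later ngc commands
ngc config set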

Uploading the Dataset to BCP

Upload the stmv dataset using the command below:

ngc dataset upload --source ./stmv/ --desc "GROMACS stmv dataset" gromacs_dataset
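
After the upload completes, the dataset ID needed for the --datasetid flag below can be looked up with the CLI (a hedged sketch; the output columns may vary by CLI version):

# List the datasets visible to your org and note the ID of gromacs_dataset
ngc dataset list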

Running GROMACS on BCP

Note: the -g <md-log-path> and -e <energy-log-path> options must be added to the run command because the mounted working directory is read-only; the output log paths must point to a writable mounted directory (here, /results).

Single-node run of the stmv dataset on 4 GPUs, with 2 thread-MPI ranks per GPU and 15 OpenMP threads per rank, for a total of 120 CPU cores.

ngc batch run --name "gromacs_reducentomp120cores" --priority NORMAL --order 50 --preempt RUNONCE --min-timeslice 0s --total-runtime 0s --ace <your-ace> --instance dgxa100.80g.4.norm --commandline "/usr/bin/nventry -build_base_dir=/usr/local/gromacs -build_default=avx2_256 gmx mdrun -g /results/md.log -e /results/ener.edr -ntmpi 8 -ntomp 15 -nb gpu -pme gpu -npme 1 -update gpu -bonded gpu -nsteps 100000 -resetstep 90000 -noconfout -dlb no -nstlist 300 -pin on -v -gpu_id 0123" --result /results/ --image "hpc/gromacs:2022.3" --org <your-org> --datasetid <dataset-id>:/host_pwd/
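
Once the job has been submitted, its status and output can be checked with the CLI (a hedged sketch; <job-id> is the ID reported by ngc batch run):

# Show recent jobs and their states
ngc batch list
# Download the /results mount (md.log, ener.edr) after the job finishes
ngc result download <job-id>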

Suggested Reading

  • GROMACS
  • GROMACS GitHub
  • GROMACS Documentation
  • GROMACS GPU Acceleration
  • GROMACS 2020 GPU optimization
  • BCP User Guide