NGC | Catalog

GROMACS

Description

GROMACS is a popular molecular dynamics application used to simulate proteins and lipids.

Publisher

KTH Royal Institute of Technology

Latest Tag

2022.3

Modified

February 1, 2023

Compressed Size

422.58 MB

Multinode Support

No

Multi-Arch Support

Yes

GROMACS

GROMACS is a molecular dynamics application designed to simulate Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed to simulate biochemical molecules like proteins, lipids, and nucleic acids that have a lot of complicated bonded interactions.

System requirements

Before running the NGC GROMACS container, please ensure your system meets the following requirements.

  • One of the following container runtimes:
    • nvidia-docker
    • Singularity
  • One of the following NVIDIA GPUs:
    • Pascal (sm60)
    • Volta (sm70)
    • Ampere (sm80)
    • Hopper (sm90)

x86_64

  • CPU with AVX instruction support
  • One of the following CUDA driver versions:
    • r520 (>= 520.61.05)
    • >= 450.80.02

arm64

  • Marvell ThunderX2 CPU
  • CUDA driver version >= r460
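As a quick sanity check before pulling the container, the CPU and driver requirements above can be verified from a shell on the host. This is a hedged sketch, not part of the official instructions; it assumes a Linux host where `/proc/cpuinfo` is readable:

```shell
# Check for AVX support on x86_64 hosts (prints "avx" when present).
grep -o -m1 'avx' /proc/cpuinfo || echo "AVX not supported"

# Report the installed NVIDIA driver version, if nvidia-smi is available.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=driver_version --format=csv,noheader
else
    echo "nvidia-smi not found"
fi
```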

System Recommendations

  • A high clock rate is more important than the number of cores, although having more than one thread per rank is beneficial.
  • GROMACS has multi-GPU support, but it needs several CPU cores per GPU. When starting out, it is best to run with a single GPU using all CPU cores, then scale up to determine which configuration performs best.
  • Launch multiple ranks per GPU for better GPU utilization. For example, on a 2-socket Broadwell server with 32 total cores and 4 P100s, set ranks per GPU to 3 and threads per rank to 2.
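As an illustration of the Broadwell example above, the rank count follows from the GPU count; the launch line below is a sketch assuming that specific machine (4 GPUs, 32 cores), not a command from this page:

```shell
# 2-socket Broadwell example: 32 cores, 4 P100 GPUs.
GPUS=4
RANKS_PER_GPU=3
THREADS_PER_RANK=2

# Total thread-MPI ranks = ranks per GPU x number of GPUs.
NTMPI=$((GPUS * RANKS_PER_GPU))

# Print the resulting launch line (12 ranks x 2 threads = 24 of 32 cores).
echo "gmx mdrun -ntmpi ${NTMPI} -ntomp ${THREADS_PER_RANK} -nb gpu -pin on -s topol.tpr"
```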

Examples

The following examples demonstrate using the NGC GROMACS container to run the water_GMX50_bare benchmark. Throughout these examples the container version is referenced as $GROMACS_TAG; replace this with the tag you wish to run.
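For example, to use the latest tag listed above (2022.3; substitute whichever tag you intend to run):

```shell
# Set the tag used by the commands below; 2022.3 is the latest tag
# listed on this page at the time of writing.
export GROMACS_TAG=2022.3
echo "Using image nvcr.io/hpc/gromacs:${GROMACS_TAG}"
```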

Download the water_GMX50_bare benchmark:

DATA_SET=water_GMX50_bare
wget -c https://ftp.gromacs.org/pub/benchmarks/${DATA_SET}.tar.gz
tar xf ${DATA_SET}.tar.gz
cd ./water-cut1.0_GMX50_bare/1536

Running with nvidia-docker

Without Infiniband

DOCKER="nvidia-docker run -it --rm -v ${PWD}:/host_pwd --workdir /host_pwd nvcr.io/hpc/gromacs:${GROMACS_TAG}"

With Infiniband

DOCKER="nvidia-docker run -it --rm -v ${PWD}:/host_pwd --workdir /host_pwd --device=/dev/infiniband --cap-add=IPC_LOCK --net=host nvcr.io/hpc/gromacs:${GROMACS_TAG}"

Prepare the benchmark data.

${DOCKER} gmx grompp -f pme.mdp

Run GROMACS.

${DOCKER} gmx mdrun -ntmpi 4 -nb gpu -pin on -v -noconfout -nsteps 5000 -ntomp 10 -s topol.tpr

Running with Singularity

Define the Singularity command used by the following steps, then prepare the benchmark data:

SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/gromacs:${GROMACS_TAG}"

${SINGULARITY} gmx grompp -f pme.mdp

Run GROMACS

${SINGULARITY} gmx mdrun -ntmpi 4 -nb gpu -pin on -v -noconfout -nsteps 5000 -ntomp 10 -s topol.tpr

Note: Singularity < v3.5

There is currently an issue in Singularity versions below v3.5 that causes LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

LD_LIBRARY_PATH="" singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/gromacs:${GROMACS_TAG} gmx grompp -f pme.mdp

Suggested Reading

GROMACS

GROMACS GitHub

GROMACS Documentation

GROMACS GPU Acceleration

GROMACS 2020 GPU optimization