
GROMACS

Description

GROMACS is a popular molecular dynamics application used to simulate proteins and lipids.

Publisher

KTH Royal Institute of Technology

Latest Tag

2021.3

Modified

May 13, 2022

Compressed Size

444.71 MB

Multinode Support

No

Multi-Arch Support

Yes

GROMACS

GROMACS is a molecular dynamics application designed to simulate Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed to simulate biochemical molecules like proteins, lipids, and nucleic acids that have a lot of complicated bonded interactions.

System Requirements

The following requirements must be met before running the NGC GROMACS container (a quick host-side check follows the lists below):

x86_64

  • Pascal (sm60), Volta (sm70), or Ampere (sm80) NVIDIA GPU(s)
  • CPU supporting the avx2_256 instruction set
  • CUDA driver version >= r460, or r418 (>= 418.40.04), r440 (>= 440.33.01), or r450 (>= 450.36.06)

arm64

  • Pascal (sm60), Volta (sm70), or Ampere (sm80) NVIDIA GPU(s)
  • ARMv8 CPU
  • CUDA driver version >= r460
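
To spot-check these requirements on the host before pulling the container, standard Linux and NVIDIA tools are sufficient. The commands below are host-side checks (not part of the NGC image) and assume a typical Linux installation with the NVIDIA driver already in place.

# Confirm the CPU exposes AVX2 (required for the avx2_256 build on x86_64).
grep -m1 -o avx2 /proc/cpuinfo

# List the visible GPUs and the installed driver version.
nvidia-smi --query-gpu=name,driver_version --format=csv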

System Recommendations

  • GROMACS works well with Ampere A100, Volta V100, or Pascal P100 GPUs.
  • A high clock rate is more important than a high core count, although having more than one thread per rank helps.
  • GROMACS supports multiple GPUs in one system, but needs several CPU cores for each GPU. It is best to start with one GPU using all CPU cores and then scale up to understand what performs best.
  • Launch multiple ranks per GPU to get better GPU utilization. For example, on a 2-socket Broadwell server with 32 total cores and 4 P100 GPUs, set ranks per GPU to 3 and threads per rank to 2 (a sketch follows this list).
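
As a rough illustration of the last two recommendations, the commands below sketch how the rank and thread counts might be chosen for the 2-socket, 32-core Broadwell server with 4 P100 GPUs described above. The flag values are illustrative starting points rather than tuned settings, and in practice each command would be prefixed with the container launcher (${DOCKER} or ${SINGULARITY}) defined in the examples further below.

# Baseline: one GPU, letting GROMACS assign all CPU cores automatically.
gmx mdrun -ntmpi 1 -nb gpu -pin on -s topol.tpr

# Scale up: 4 GPUs x 3 ranks per GPU = 12 thread-MPI ranks, 2 OpenMP threads per rank.
gmx mdrun -ntmpi 12 -ntomp 2 -nb gpu -pin on -s topol.tpr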

Running GROMACS Examples

Download Dataset

Download the water_GMX50_bare benchmark:

DATA_SET=water_GMX50_bare
wget -c https://ftp.gromacs.org/pub/benchmarks/${DATA_SET}.tar.gz
tar xf ${DATA_SET}.tar.gz
cd ./water-cut1.0_GMX50_bare/1536

Select Tag

Several GROMACS images are available, depending on your needs. Set the following environment variable, which is used in the examples below.

export GROMACS_TAG={TAG}
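
For example, to use the latest tag listed above:

export GROMACS_TAG=2021.3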

Running with nvidia-docker

NGC supports the Docker runtime through the nvidia-docker plugin. This example is a starting point; modify and adapt it to best fit your system architecture.
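
If you prefer to fetch the image ahead of time (docker run will otherwise pull it on first use), a plain docker pull works; depending on your NGC account setup, a prior docker login nvcr.io may be required.

# Optional: pull the GROMACS image explicitly before the first run.
docker pull nvcr.io/hpc/gromacs:${GROMACS_TAG}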

Without Infiniband

DOCKER="nvidia-docker run -it --rm -v ${PWD}:/host_pwd --workdir /host_pwd nvcr.io/hpc/gromacs:${GROMACS_TAG}"

With Infiniband

DOCKER="nvidia-docker run -it --rm -v ${PWD}:/host_pwd --workdir /host_pwd --device=/dev/infiniband --cap-add=IPC_LOCK --net=host nvcr.io/hpc/gromacs:${GROMACS_TAG}"

Prepare the benchmark data.

${DOCKER} gmx grompp -f pme.mdp

Run GROMACS.

${DOCKER} gmx mdrun -ntmpi 4 -nb gpu -pin on -v -noconfout -nsteps 5000 -ntomp 10 -s topol.tpr

Running with Singularity

This example is a starting point; modify and adapt it to best fit your system architecture.

Pull the Image

Save the NGC GROMACS container as a local Singularity image file:

$ singularity build ${GROMACS_TAG}.sif docker://nvcr.io/hpc/gromacs:${GROMACS_TAG}

The container is now saved in the current directory as ${GROMACS_TAG}.sif.

Define the SINGULARITY command variable.

SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${GROMACS_TAG}.sif"

Prepare benchmark data.

${SINGULARITY} gmx grompp -f pme.mdp

Run GROMACS.

${SINGULARITY} gmx mdrun -ntmpi 4 -nb gpu -pin on -v -noconfout -nsteps 5000 -ntomp 10 -s topol.tpr

Note: Singularity 3.1.x - 3.2.x

There is currently a bug in Singularity 3.1.x and 3.2.x that causes LD_LIBRARY_PATH to be set incorrectly within the container environment. As a workaround, LD_LIBRARY_PATH must be cleared before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...
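
For example, combining this workaround with the SINGULARITY variable and the grompp step defined above (the empty LD_LIBRARY_PATH applies only to that single invocation):

LD_LIBRARY_PATH="" ${SINGULARITY} gmx grompp -f pme.mdp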

Suggested Reading

GROMACS

GROMACS GitHub

GROMACS Documentation

GROMACS GPU Acceleration

GROMACS 2020 GPU optimization