GROMACS


Description

GROMACS is a popular molecular dynamics application used to simulate proteins and lipids.

Publisher

KTH Royal Institute of Technology

Latest Tag

2022.1

Modified

July 1, 2022

Compressed Size

315.06 MB

Multinode Support

No

Multi-Arch Support

Yes

2022.1 (Latest) Scan Results

  • Linux / arm64
  • Linux / amd64

GROMACS

GROMACS is a molecular dynamics application designed to simulate Newtonian equations of motion for systems with hundreds to millions of particles. It is designed to simulate biochemical molecules such as proteins, lipids, and nucleic acids that have many complicated bonded interactions. More information about GROMACS is available at http://www.gromacs.org/

See the NGC container documentation for prerequisites and setup steps common to all HPC containers, as well as instructions for pulling NGC containers.

System requirements

Before running the NGC GROMACS container, please ensure your system meets the following requirements. A quick way to check them is sketched after the lists below.

x86_64

  • Pascal (sm60), Volta (sm70), or Ampere (sm80) NVIDIA GPU(s)
  • CPU supporting the AVX2 (avx2_256) instruction set
  • CUDA driver version >= r450, or an r418 / r440 branch driver

arm64

  • Pascal (sm60), Volta (sm70), or Ampere (sm80) NVIDIA GPU(s)
  • ARMv8 CPU
  • CUDA driver version >= r450
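
As a quick sanity check (a minimal sketch; it assumes nvidia-smi is installed on the host and, on x86_64, that /proc/cpuinfo is readable), the GPU model, driver version, and CPU instruction set support can be inspected before starting the container:

$ nvidia-smi --query-gpu=name,driver_version --format=csv   # GPU model and driver branch
$ grep -m1 -o avx2 /proc/cpuinfo                            # x86_64 only: prints "avx2" if supported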

System Recommendations

  • GROMACS works well with Ampere A100, Volta V100, or Pascal P100 GPUs.
  • A high CPU clock rate matters more than a high core count, although having more than one thread per rank helps.
  • GROMACS supports multiple GPUs in one system but needs several CPU cores per GPU. It is best to start with one GPU using all CPU cores and then scale up to find what performs best.
  • Launch multiple ranks per GPU to get better GPU utilization. For example, on a two-socket Broadwell server with 32 total cores and 4 P100 GPUs, set ranks per GPU to 3 and threads per rank to 2 (see the sketch after this list).
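
As a rough illustration of that last recommendation (a sketch only; the flag values follow the example above and are not tuned for any specific system), 4 GPUs x 3 ranks per GPU gives 12 thread-MPI ranks with 2 OpenMP threads each:

gmx mdrun -ntmpi 12 -ntomp 2 -nb gpu -bonded gpu -pme gpu -npme 1 -s topol.tpr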

Running GROMACS

Executables

gmx: primary GROMACS executable
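
To confirm which GROMACS build the container provides, the executable can be queried directly (a sketch following the docker pattern used below; [app_tag] stands for the container tag in use):

docker run --rm --runtime nvidia nvcr.io/hpc/gromacs:[app_tag] sh -c "gmx --version"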

Command invocation

An example command is:

gmx mdrun -s .../example.tpr

Where:

  • -s example.tpr: portable xdr run input file

Environment variables

  • GMX_GPU_DD_COMMS: set to true to enable halo exchange communications between PP tasks
  • GMX_GPU_PME_PP_COMMS: set to true to enable communications between PME and PP tasks
  • GMX_FORCE_UPDATE_DEFAULT_GPU: set to true to enable the update and constraints part of the timestep on the GPU for multi-GPU runs
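
These variables are read by gmx inside the container, so one way to set them is to pass them through docker run with -e (a sketch that follows the pattern of the examples below; the mount path and [app_tag] are placeholders, not values from this page):

docker run -ti --runtime nvidia -e GMX_GPU_DD_COMMS=true -e GMX_GPU_PME_PP_COMMS=true -e GMX_FORCE_UPDATE_DEFAULT_GPU=true -v $(pwd):/benchmark --workdir /benchmark nvcr.io/hpc/gromacs:[app_tag] sh -c "gmx mdrun -nb gpu -bonded gpu -pme gpu -s topol.tpr"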

Examples

The following examples demonstrate how to run the NGC GROMACS container under the supported runtimes.

Running with nvidia-docker

Command line execution with nvidia-docker

docker run -ti --runtime nvidia -v /dev/infiniband:/dev/infiniband -v $(pwd)/gromacs_benchmarks/adh_cubic:/benchmark --workdir /benchmark nvcr.io/hpc/gromacs:[app_tag] sh -c "gmx grompp -f pme_verlet.mdp"
docker run -ti --runtime nvidia -v /dev/infiniband:/dev/infiniband -v $(pwd)/gromacs_benchmarks/adh_cubic:/benchmark --workdir /benchmark nvcr.io/hpc/gromacs:[app_tag] sh -c "gmx mdrun -v -nsteps 100000 -resetstep 90000 -noconfout -ntmpi 4 -ntomp 10 -nb gpu -bonded gpu -pme gpu -npme 1 -nstlist 400 -s topol.tpr"

This example pre-processes the adh_cubic benchmark data and then runs GROMACS on it; the commands expect the data under $(pwd)/gromacs_benchmarks/adh_cubic on the host.
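
If the benchmark data is not already present, it can be fetched by hand. The sketch below assumes (this is not stated on this page) that the adh_cubic system is distributed in the ADH_bench_systems.tar.gz archive on the GROMACS FTP server, and extracts it where the commands above expect it:

$ mkdir -p gromacs_benchmarks && cd gromacs_benchmarks
$ wget https://ftp.gromacs.org/pub/benchmarks/ADH_bench_systems.tar.gz   # URL assumed, not from this page
$ tar xf ADH_bench_systems.tar.gz                                        # assumed to contain adh_cubic/
$ cd ..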

Example of successful GROMACS output:

Dynamic load balancing report:
 DLB was off during the run due to low measured imbalance.
 Average load imbalance: 1.9%.
 The balanceable part of the MD step is 60%, load imbalance is computed from this.
 Part of the total run time spent waiting due to load imbalance: 1.2%.
 Average PME mesh/force load: 1.084
 Part of the total run time spent waiting due to PP/PME imbalance: 2.2 %


               Core t (s)   Wall t (s)        (%)
       Time:      695.109       17.380     3999.5
                 (ns/day)    (hour/ns)
Performance:       99.436        0.241


GROMACS reminds you: "Boom Boom Boom Boom, I Want You in My Room" (Venga Boys)

This example is designed to facilitate customization of the data set and the GROMACS run command, allowing it to be adapted to a variety of uses beyond checking GROMACS container functionality.

Interactive shell with nvidia-docker

The following command launches an interactive shell in the GROMACS container using nvidia-docker, mounting $HOME/data from the host as /data in the container:

$ docker run -it --rm --runtime nvidia --privileged -v $HOME/data:/data nvcr.io/hpc/gromacs:[app_tag]

Where:

  • -it: start container with an interactive terminal (short for --interactive --tty)
  • --rm: make container ephemeral (removes container on exit)
  • -v $HOME/data:/data: bind mount $HOME/data from the host into the container as /data
  • --runtime nvidia: expose the host NVIDIA GPU(s) to the container
  • --privileged: allow access to additional host devices, such as InfiniBand adapters

This should produce a root prompt within the container:

root@3a8c8b7c3a88:/workspace#
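
From this prompt, gmx can be invoked directly against the mounted data; for example (a sketch that assumes a prepared topol.tpr exists under /data, which is not something provided by the container):

root@3a8c8b7c3a88:/workspace# gmx mdrun -v -nb gpu -s /data/topol.tpr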

Running with Singularity

Pull the image

Save the NGC GROMACS container as a local Singularity image file:

$ singularity build gromacs-2020_2.sif docker://nvcr.io/hpc/gromacs:2020.2
$ export SIF=$(pwd)/gromacs-2020_2.sif

The GROMACS Singularity image is now saved in the current directory as gromacs-2020_2.sif.
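
As a quick check that the image works (a minimal sketch; it assumes the SIF variable exported above and a host with the NVIDIA driver installed):

$ singularity exec --nv ${SIF} gmx --version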

Note: Singularity/2.x

In order to pull NGC images with singularity version 2.x and earlier, NGC container registry authentication credentials are required.

To set your NGC container registry authentication credentials:

$ export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
$ export SINGULARITY_DOCKER_PASSWORD=

More information describing how to obtain and use your NVIDIA NGC Cloud Services API key can be found in the NGC documentation.

Note: Singularity 3.1.x - 3.2.x

There is currently a bug in Singularity 3.1.x and 3.2.x that causes LD_LIBRARY_PATH to be set incorrectly within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...

Command line execution

To run the water benchmark, download the example script from the NGC Examples Repository, make it executable, and run it.

Singularity will mount the host PWD to /host_pwd in the container:

SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${SIF}"

Prepare benchmark data

${SINGULARITY} gmx grompp -f pme.mdp

Run benchmark

${SINGULARITY} gmx mdrun -v -nsteps 100000 -resetstep 90000 -noconfout -ntmpi 4 -ntomp 10 -nb gpu -bonded gpu -pme gpu -npme 1 -nstlist 400 -s topol.tpr

This example script downloads the water GMX50 bare benchmark data, pre-processes the data, and then runs GROMACS with the data.
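
If the benchmark data needs to be fetched by hand instead of via the example script, a sketch such as the following can be used. The archive URL and directory layout are assumptions, not taken from this page, and the SINGULARITY wrapper is redefined so that the bind mount points at the benchmark directory:

$ wget https://ftp.gromacs.org/pub/benchmarks/water_GMX50_bare.tar.gz   # URL assumed
$ tar xf water_GMX50_bare.tar.gz
$ cd water-cut1.0_GMX50_bare/1536                                       # directory name assumed; pick any system size
$ SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${SIF}"
$ ${SINGULARITY} gmx grompp -f pme.mdp
$ ${SINGULARITY} gmx mdrun -nsteps 1000 -noconfout -s topol.tpr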

Example of successful GROMACS output:

Dynamic load balancing report:
 DLB was off during the run due to low measured imbalance.
 Average load imbalance: 1.9%.
 The balanceable part of the MD step is 60%, load imbalance is computed from this.
 Part of the total run time spent waiting due to load imbalance: 1.2%.
 Average PME mesh/force load: 1.084
 Part of the total run time spent waiting due to PP/PME imbalance: 2.2 %


               Core t (s)   Wall t (s)        (%)
       Time:      695.109       17.380     3999.5
                 (ns/day)    (hour/ns)
Performance:       99.436        0.241


GROMACS reminds you: "Boom Boom Boom Boom, I Want You in My Room" (Venga Boys)

The example script is designed to facilitate customization of the data set and the GROMACS run command, allowing it to be adapted to a variety of uses beyond checking GROMACS container functionality.

Interactive shell

The following command will launch an interactive shell in the GROMACS container using singularity shell:

$ singularity shell --nv gromacs-2020_2.sif

Where:

  • --nv: expose the host GPU(s) to the container

This should produce a Singularity shell prompt within the container:

INFO Configured container for NVIDIA GPU architecture sm70
Singularity >
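
From this prompt, gmx is available on the path and can be run directly; for example (the topol.tpr here is an assumed input in the bound working directory, not something provided by the container):

Singularity > gmx --version
Singularity > gmx mdrun -v -nb gpu -s topol.tpr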

Suggested Reading

  • GROMACS Documentation
  • GROMACS 2020 GPU optimization