NAMD


Description

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis.

Publisher

UIUC

Latest Tag

3.0-alpha11

Modified

March 1, 2023

Compressed Size

337.74 MB

Multinode Support

No

Multi-Arch Support

Yes

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR.

System Requirements

The following requirements must be met before running the NGC NAMD container:

x86_64

  • Pascal (sm60), Volta (sm70), or Ampere (sm80) NVIDIA GPU(s)
  • CUDA driver version >=450.36.06
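
One way to confirm that the installed driver satisfies this requirement is to query it with nvidia-smi, which ships with the NVIDIA driver:

nvidia-smi --query-gpu=name,driver_version --format=csv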

Running NAMD Examples

Download Dataset

The ApoA1 benchmark consists of 92,224 atoms and has been a standard NAMD cross-platform benchmark for years. Follow the steps below to use the ApoA1 input dataset to test the NGC NAMD container.

Download the ApoA1 dataset:

wget -O - https://gitlab.com/NVHPC/ngc-examples/raw/master/namd/3.0/get_apoa1.sh | bash
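
If you prefer to inspect the script before executing it, download it first rather than piping it directly to bash:

wget https://gitlab.com/NVHPC/ngc-examples/raw/master/namd/3.0/get_apoa1.sh
cat get_apoa1.sh
bash get_apoa1.sh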

Replace {input_file} in the examples below with the path to the ApoA1 NAMD input file:

/host_pwd/apoa1/apoa1_nve_cuda_soa.namd
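
For convenience, the {input_file} placeholder used in the examples can be captured in a shell variable; INPUT_FILE is an arbitrary name chosen here for illustration:

export INPUT_FILE=/host_pwd/apoa1/apoa1_nve_cuda_soa.namd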

Select Tag

Several NAMD images are available, depending on your needs. Set the following environment variable, which will be used in the examples below.

export NAMD_TAG={TAG}

Where {TAG} is 3.0-alpha11 or any other tag previously posted on NGC.

Set the executable name to match the chosen tag (namd3 for 3.x tags, namd2 for 2.x tags).

export NAMD_EXE=namd3
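
For example, to target the 3.0-alpha11 tag shown above:

export NAMD_TAG=3.0-alpha11
export NAMD_EXE=namd3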

Running with nvidia-docker

NGC supports the Docker runtime through the nvidia-docker plugin.
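
The image can optionally be pulled ahead of time; otherwise nvidia-docker run will fetch it on first use:

docker pull nvcr.io/hpc/namd:${NAMD_TAG}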

Without Infiniband

DOCKER="nvidia-docker run -it --rm -v $(pwd):/host_pwd nvcr.io/hpc/namd:${NAMD_TAG}"

With Infiniband

DOCKER="nvidia-docker run -it --rm -v $(pwd):/host_pwd --device=/dev/infiniband --cap-add=IPC_LOCK --net=host nvcr.io/hpc/namd:${NAMD_TAG}"

Launch NAMD across all CPU cores, utilizing all GPUs, on your local machine or single node:

${DOCKER} ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}

The nproc command specifies that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.
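
As a sketch, the launch command with a manually chosen PE count would look like the following, where 16 is an arbitrary value to be tuned for your system:

${DOCKER} ${NAMD_EXE} +ppn 16 +setcpuaffinity +idlepoll {input_file}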

An example shell script demonstrating this mode is available for the 3.0-alpha11 tag.

Running with Singularity

The NGC NAMD container provides native Singularity runtime support.

Pull the Image

Save the NGC NAMD container as a local Singularity image file, depending on whether you're targeting a single workstation or a multi-node cluster.

singularity build ${NAMD_TAG}.sif docker://nvcr.io/hpc/namd:${NAMD_TAG}

The container is now saved as ${NAMD_TAG}.sif in the current directory.
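
A quick sanity check that the image file was written:

ls -lh ${NAMD_TAG}.sif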

Define the SINGULARITY command variable.

SINGULARITY="$(which singularity) exec --nv -B $(pwd):/host_pwd ${NAMD_TAG}.sif"

Launch NAMD across all CPU cores, utilizing all GPUs, on your local machine or single node:

${SINGULARITY} ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}

The nproc command specifies that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.
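
As under Docker, the PE count can be pinned manually; the sketch below also selects a single GPU with +devices, a NAMD option for choosing CUDA devices (both values are arbitrary illustrations to be tuned for your system):

${SINGULARITY} ${NAMD_EXE} +ppn 8 +devices 0 +setcpuaffinity +idlepoll {input_file}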

An example shell script demonstrating this mode is available for the 3.0-alpha11 tag.

Note: Singularity 3.1.x - 3.5.x

There is currently a bug in Singularity 3.1.x and 3.2.x that causes LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...
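
Applied to the single-node launch command above, the full workaround invocation would be:

LD_LIBRARY_PATH="" ${SINGULARITY} ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}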

Suggested Reading

NAMD Manual

Charm++ Manual