NAMD

Description

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis.

Publisher

UIUC

Latest Tag

3.0-beta2

Modified

September 1, 2023

Compressed Size

1.6 GB

Multinode Support

No

Multi-Arch Support

Yes

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR.

System Requirements

The following requirements must be met before running the NGC NAMD container:

Container Runtimes

  • Docker (via nvidia-docker) or Singularity, as described in the run instructions below

x86_64

  • Pascal (sm60), Volta (sm70), or Ampere (sm80) NVIDIA GPU(s)
  • CUDA driver version >= 450.80.02

Running NAMD Examples

Download Dataset

The ApoA1 benchmark consists of 92,224 atoms and has been a standard NAMD cross-platform benchmark for years. Follow the steps below to use the ApoA1 input dataset to test the NGC NAMD container.

Download the ApoA1 dataset to your current directory:

wget -O - https://gitlab.com/NVHPC/ngc-examples/raw/master/namd/3.0/get_apoa1.sh | bash

Take a moment to inspect the shell script above. In particular, it injects the CUDASOAintegrate on setting into the configuration file, which enables the NAMD 3.0 GPU-resident code path.
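
To confirm that the setting was injected, you can grep the downloaded input files from your current directory on the host (a quick, optional check):

grep -i cudasoaintegrate ./apoa1/*.namd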

Replace {input_file} in the examples below with the path to the ApoA1 NAMD input file:

/host_pwd/apoa1/apoa1_nve_cuda_soa.namd

Select Tag

Several NAMD images are available, depending on your needs. Set the following environment variable, which will be used in the examples below.

export NAMD_TAG={TAG}

Where {TAG} is 3.0-beta2 or any other tag previously posted on NGC.
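
For example, to use the latest tag listed above:

export NAMD_TAG=3.0-beta2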

Set the executable name depending on the chosen tag.

export NAMD_EXE=namd3
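
If you choose an older 2.x tag instead, the executable name differs; the 2.x images are assumed here to ship a namd2 binary, so adjust accordingly:

export NAMD_EXE=namd2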

Known Issue for tag 3.0-beta2

  • Using more than one thread per GPU will result in an error at the end of execution. This bug will be fixed in an upcoming version. To avoid this error, either use tag 3.0-alpha11 or specify both GPU-resident mode (CUDASOAintegrate on) and device migration (DeviceMigration on) in the configuration file, as sketched below.
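
For reference, a minimal sketch of the two settings as they would appear in a NAMD configuration file (parameter names taken from the note above):

CUDASOAintegrate   on
DeviceMigration    on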

Running with nvidia-docker

NGC supports the Docker runtime through the nvidia-docker plugin.

Without Infiniband

DOCKER="nvidia-docker run -it --rm -v $(pwd):/host_pwd nvcr.io/hpc/namd:${NAMD_TAG}"

With Infiniband

DOCKER="nvidia-docker run -it --rm -v $(pwd):/host_pwd --device=/dev/infiniband --cap-add=IPC_LOCK --net=host nvcr.io/hpc/namd:${NAMD_TAG}"

Usage

Launch NAMD with 1 CPU thread, utilizing 1 GPU (simplest way for NAMD versions >= 3.0), on your local machine or single node:

${DOCKER} ${NAMD_EXE} +p1 +devices 0 +setcpuaffinity {input_file}

The +p argument specifies the number of CPU threads to use, and +devices specifies which GPUs to use. To use 2 CPU threads and 2 GPUs:

${DOCKER} ${NAMD_EXE} +p2 +devices 0,1 +setcpuaffinity {input_file}

An example shell script demonstrating this mode is available for the 3.0-alpha11 tag.

In addition, for very large systems, multi-node simulations, or Pascal GPUs, it is recommended to use the NAMD 2.x code path. In NAMD 3, this can be achieved by setting CUDASOAintegrate off, or simply by not setting it in the configuration file. The input file /host_pwd/apoa1/apoa1_nve_cuda.namd (note the lack of _soa in the file name) in the ApoA1 dataset can be used to test this:

${DOCKER} ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}

The nproc command specifies that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.
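
For example, to pin the run to a fixed number of PEs instead of all cores (16 is an arbitrary illustration; tune it for your system):

${DOCKER} ${NAMD_EXE} +ppn 16 +setcpuaffinity +idlepoll {input_file}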

Running with Singularity

The NGC NAMD container provides native Singularity runtime support.

Pull the Image

Save the NGC NAMD container as a local Singularity image file, whether you're targeting a single workstation or a multi-node cluster.

singularity build ${NAMD_TAG}.sif docker://nvcr.io/hpc/namd:${NAMD_TAG}

The container is now saved in the current directory as ${NAMD_TAG}.sif

Define the SINGULARITY command alias.

SINGULARITY="$(which singularity) exec --nv -B $(pwd):/host_pwd ${NAMD_TAG}.sif"

Launch NAMD with 1 CPU thread, utilizing 1 GPU (simplest way for NAMD versions >= 3.0), on your local machine or single node:

${SINGULARITY} ${NAMD_EXE} +p1 +devices 0 +setcpuaffinity {input_file}

The +p argument specifies the number of CPU threads to use, and +devices specifies which GPUs to use. To use 2 CPU threads and 2 GPUs:

${SINGULARITY} ${NAMD_EXE} +p2 +devices 0,1 +setcpuaffinity {input_file}

An example shell script demonstrating this mode is available for the 3.0-alpha11 tag.

In addition, for very large systems, multi-node simulations, or Pascal GPUs, it is recommended to use the NAMD 2.x code path. In NAMD 3, this can be achieved by setting CUDASOAintegrate off, or simply by not setting it in the configuration file. The input file /host_pwd/apoa1/apoa1_nve_cuda.namd (note the lack of _soa in the file name) in the ApoA1 dataset can be used to test this:

${SINGULARITY} ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}

The nproc command specifies that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.

Note: Singularity 3.1.x - 3.5.x

There is currently a bug in Singularity 3.1.x and 3.2.x causing the LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, the LD_LIBRARY_PATH must be unset before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...
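
Applied to the launch commands above, the workaround looks like this (a sketch using the SINGULARITY and NAMD_EXE variables defined earlier):

LD_LIBRARY_PATH="" ${SINGULARITY} ${NAMD_EXE} +p1 +devices 0 +setcpuaffinity {input_file}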

Running on Base Command Platform

NVIDIA Base Command Platform (BCP) offers a ready-to-use, cloud-hosted solution that manages the end-to-end lifecycle of development, workflows, and resource management. Before running the commands below, install and configure the NGC CLI; more information can be found in the NGC CLI documentation.
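
A typical one-time setup looks like the following; ngc config set prompts interactively for your API key and default org, team, and ACE:

ngc config set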

Uploading the Dataset to BCP

Note: apoa1_nve_cuda_soa.namd needs to be modified to remove the outputName parameter, because the dataset directory is mounted read-only.
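
One way to drop that parameter before uploading (a sketch assuming the parameter is spelled outputName and appears on its own line in the config file):

sed -i '/outputName/d' ./apoa1/apoa1_nve_cuda_soa.namd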

Upload the ApoA1 dataset using the command below:

ngc dataset upload --source ./apoa1/ --desc "NAMD dataset" namd_dataset
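
After the upload completes, the dataset ID referenced below can be found by listing your datasets:

ngc dataset list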

Running NAMD on BCP

Run on a single node with a single GPU using the ApoA1 dataset:

ngc batch run --name "NAMD_single_gpu" --priority NORMAL --order 50 --preempt RUNONCE --min-timeslice 0s --total-runtime 0s --ace <your-ace> --instance dgxa100.80g.1.norm --commandline "namd3 +p1 +devices 0 +setcpuaffinity --outputName /results/namd_output /work/apoa1_nve_cuda_soa.namd" --result /results/ --image "hpc/namd:${NAMD_TAG}" --org <your-org> --datasetid <datasetid>:/work/
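
Once the job is submitted, it can be monitored and its results retrieved with the NGC CLI, for example (replace <job-id> with the ID returned by ngc batch run):

ngc batch list
ngc batch info <job-id>
ngc result download <job-id>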

Suggested Reading

NAMD Manual

charm++ Manual

BCP User Guide