NGC | Catalog
NAMD

Description

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR.

Publisher

UIUC

Latest Tag

3.0-alpha3-singlenode

Modified

February 25, 2022

Compressed Size

721.18 MB

Multinode Support

Yes

Multi-Arch Support

Yes



System Requirements

The following requirements must be met before running the NGC NAMD container:

  • One of the following container runtimes
    • nvidia-docker
    • Singularity >= 3.1

3.0_alpha3-singlenode

x86_64

  • Pascal (sm60), Volta (sm70), or Ampere (sm80) NVIDIA GPU(s)
  • CUDA driver version >= r450, r440, or r418

arm64

  • Pascal (sm60), Volta (sm70), or Ampere (sm80) NVIDIA GPU(s)
  • CUDA driver version >= r450

2.13-singlenode -or- 2.13-multinode

x86_64

  • Pascal (sm60) or Volta (sm70) NVIDIA GPU(s)
  • CUDA driver >= 387.26
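One way to compare the installed driver against these minimums is a `sort -V` version check; a minimal sketch, assuming GNU coreutils (the `nvidia-smi` query shown in the comment depends on the local setup):

```shell
# Succeeds (exit 0) when dotted version string $1 >= $2, using GNU `sort -V`.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# e.g. check the driver branch against the r450 minimum:
#   version_ge "$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)" 450
```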

Running NAMD

Examples

The ApoA1 benchmark consists of 92,224 atoms and has been a standard NAMD cross-platform benchmark for years. Follow the steps below to use the ApoA1 input dataset to test the NGC NAMD container.

Download the ApoA1 dataset:

wget -O - https://gitlab.com/NVHPC/ngc-examples/raw/master/namd/3.0/get_apoa1.sh | bash

Replace {input_file} in the examples below with the path to the ApoA1 NAMD input file:

3.0_alpha3-singlenode

/host_pwd/apoa1/apoa1_nve_cuda_soa.namd

2.13-singlenode and 2.13-multinode

/host_pwd/apoa1/apoa1_nve_cuda.namd
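The tag-to-input mapping above can be sketched as a small shell helper (the helper name `apoa1_input` is illustrative, not part of the container; paths come from the dataset downloaded above):

```shell
# Pick the ApoA1 input file matching the chosen tag: the 3.0 alpha uses
# the SOA (structure-of-arrays) input, the 2.13 tags use the classic one.
apoa1_input() {
  case "$1" in
    3.0_alpha3-singlenode) echo /host_pwd/apoa1/apoa1_nve_cuda_soa.namd ;;
    *)                     echo /host_pwd/apoa1/apoa1_nve_cuda.namd ;;
  esac
}

# e.g. INPUT=$(apoa1_input "${NAMD_TAG}")
```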

Select tag

Several NAMD images are available, depending on your needs. Set the following environment variable, which will be used in the examples below.

export NAMD_TAG={TAG}

Where {TAG} is one of:

  • 3.0_alpha3-singlenode
  • 2.13-singlenode
  • 2.13-multinode

Set the executable name, depending on the chosen tag.

3.0_alpha3-singlenode

export NAMD_EXE=namd3

2.13-singlenode -or- 2.13-multinode

export NAMD_EXE=namd2
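The tag-to-executable mapping above can likewise be captured in a small helper (the name `namd_exe_for` is illustrative), failing loudly on an unrecognized tag rather than running the wrong binary:

```shell
# Derive the NAMD executable name from the image tag.
namd_exe_for() {
  case "$1" in
    3.0_alpha3-singlenode)          echo namd3 ;;
    2.13-singlenode|2.13-multinode) echo namd2 ;;
    *) echo "unknown tag: $1" >&2; return 1 ;;
  esac
}

# e.g. export NAMD_EXE=$(namd_exe_for "${NAMD_TAG}")
```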

Running with Singularity

The NGC NAMD container provides native Singularity runtime support.

Pull the image

Save the NGC NAMD container as a local Singularity image file, choosing the tag based on whether you're targeting a single workstation or a multi-node cluster.

singularity build ${NAMD_TAG}.sif docker://nvcr.io/hpc/namd:${NAMD_TAG}

The container is now saved in the current directory as ${NAMD_TAG}.sif

Singularity Alias

To simplify the examples below, define the SINGULARITY command alias, which may be set as an environment variable in your shell session or batch script.

SINGULARITY="$(which singularity) exec --nv -B $(pwd):/host_pwd ${NAMD_TAG}.sif"

Where:

  • --nv: expose the host GPU to the container
  • -B $(pwd):/host_pwd: expose the current working directory in the container at /host_pwd

Note: Singularity 3.1.x - 3.5.x

There is currently a bug in Singularity 3.1.x and 3.2.x that causes LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...

Local workstation with Singularity

This mode of running is suitable for interactive execution from a local workstation containing one or more GPUs. There are no requirements other than those stated in the System Requirements section.

Command line

Launch NAMD across all CPU cores, utilizing all GPUs, on the local machine:

${SINGULARITY} ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}

The nproc command is used to specify that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.
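Note that `nproc` counts hardware threads, so on hyperthreaded machines one way to pin `+ppn` to physical cores is to count unique core/socket pairs from `lscpu`; a minimal sketch (the helper name and the `lscpu -p=Core,Socket` invocation are assumptions about the local tooling):

```shell
# Count physical cores from `lscpu -p=Core,Socket` style CSV on stdin:
# comment lines start with '#'; hyperthread siblings share a core,socket
# pair, so unique pairs equal physical cores.
count_physical_cores() {
  grep -v '^#' | sort -u | wc -l
}

# e.g. PPN=$(lscpu -p=Core,Socket | count_physical_cores)
```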

An example shell script demonstrating this mode is available:

  • 2.13-singlenode
  • 3.0_alpha3-singlenode

Interactive shell

Start an interactive shell within the container environment:

${SINGULARITY} /bin/bash

Run the example across all CPU cores, utilizing all GPUs:

${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}

Here the nproc command is used to specify that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.

Container charmrun with Singularity

The NAMD NGC container allows for parallel Charm++ jobs to be launched fully from within the container. This mode of running is suitable for most clusters.

Requirements

  • namd_2.13-multinode tag
  • Passwordless rsh/ssh between compute nodes

Running

A Charm++ nodelist must be generated; it takes the following form:

host {hostname_1} ++cpus {cores_per_node}
host {hostname_2} ++cpus {cores_per_node}
...
host {hostname_n} ++cpus {cores_per_node}

Generation of this nodelist file will vary from cluster to cluster. Common examples include:

SLURM:

NODELIST=$(pwd)/nodelist.${SLURM_JOBID}
for host in $(scontrol show hostnames); do
  echo "host ${host} ++cpus ${SLURM_CPUS_ON_NODE}" >> ${NODELIST}
done

PBS:

PBS_TASK_COUNT=$(grep -c . ${PBS_NODEFILE})
PBS_NODE_COUNT=$(uniq ${PBS_NODEFILE} | wc -l)
PBS_TASKS_PER_NODE=$(( PBS_TASK_COUNT / PBS_NODE_COUNT ))

NODELIST=$(pwd)/.nodelist.${PBS_JOBID}
for host in $(uniq ${PBS_NODEFILE}); do
  echo "host ${host} ++cpus ${PBS_TASKS_PER_NODE}" >> ${NODELIST}
done
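For clusters without SLURM or PBS, the same nodelist can be generated from any plain hostfile; a minimal sketch (`make_nodelist` is an illustrative helper, not part of the container):

```shell
# Build a Charm++ nodelist from a plain hostfile (one hostname per line,
# consecutive duplicates collapsed), with a fixed core count per node.
make_nodelist() {
  hostfile=$1
  cpus_per_node=$2
  uniq "$hostfile" | while read -r host; do
    echo "host ${host} ++cpus ${cpus_per_node}"
  done
}

# e.g. make_nodelist my_hosts.txt 16 > nodelist
```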

Set SSH options:

SSH="ssh -o PubkeyAcceptedKeyTypes=+ssh-dss -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR"

Set an alias to the charmrun command:

CHARMRUN="charmrun ++remote-shell ${SSH} ++nodelist ${NODELIST} ++p {procs_total}"

Launch namd2 using charmrun:

${SINGULARITY} ${CHARMRUN} ${SINGULARITY} namd2 +ppn {cores_per_node} +setcpuaffinity +idlepoll {input_file}

Where {cores_per_node} is typically set to the number of physical CPU cores on a given node and {procs_total} is set to {cores_per_node} - 1, leaving one core free for communication.
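The sizing rule above, worked through for a hypothetical 16-core node:

```shell
# Example values only: a node with 16 physical cores, one of which is
# reserved for communication, leaves 15 worker processes.
CORES_PER_NODE=16
PROCS_TOTAL=$((CORES_PER_NODE - 1))   # 15
```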

Batch script examples

Complete example batch scripts are provided which detail launching namd2 across a cluster. Slight modification may be necessary to ensure compatibility with your specific cluster.

  • Slurm batch script
  • PBS batch script

Running with nvidia-docker

NGC supports the Docker runtime through the nvidia-docker plugin.

nvidia-docker Aliases

To simplify the examples below, define the DOCKER command alias, which may be set as an environment variable in your shell or batch script.

Without InfiniBand hardware

DOCKER="nvidia-docker run -it --rm -v $(pwd):/host_pwd nvcr.io/hpc/namd:${NAMD_TAG}"

With InfiniBand hardware

DOCKER="nvidia-docker run -it --rm -v $(pwd):/host_pwd --device=/dev/infiniband --cap-add=IPC_LOCK --net=host nvcr.io/hpc/namd:${NAMD_TAG}"

Where:

  • DOCKER: alias used to define the base docker command
  • -it: allocate a pseudo-TTY for interactive use
  • --rm: remove container on exit
  • -v $(pwd):/host_pwd: expose the current working directory in the container as /host_pwd
  • --device=/dev/infiniband --cap-add=IPC_LOCK --net=host: allow access to host InfiniBand device(s)
  • ${NAMD_TAG}: tag name set earlier
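Picking between the two aliases can be automated by testing for the InfiniBand device node; a sketch (`docker_flags` is an illustrative helper, with the device path taken as a parameter so the check is explicit):

```shell
# Emit the extra docker flags only when the given InfiniBand device
# path exists; otherwise emit nothing.
docker_flags() {
  ib_dev=$1
  if [ -e "$ib_dev" ]; then
    echo "--device=${ib_dev} --cap-add=IPC_LOCK --net=host"
  else
    echo ""
  fi
}

# e.g. DOCKER="nvidia-docker run -it --rm -v $(pwd):/host_pwd $(docker_flags /dev/infiniband) nvcr.io/hpc/namd:${NAMD_TAG}"
```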

Local workstation with nvidia-docker

This mode of running is suitable for interactive execution from a local workstation containing one or more GPUs. There are no requirements other than those stated in the System Requirements section.

Command line execution with nvidia-docker

Launch NAMD across all CPU cores, utilizing all GPUs, on the local machine:

${DOCKER} ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}

The nproc command is used to specify that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.

An example shell script demonstrating this mode is available:

  • 2.13-singlenode
  • 3.0_alpha3-singlenode

Interactive shell with nvidia-docker

Start an interactive shell within the container environment:

${DOCKER}

Launch NAMD across all CPU cores, utilizing all GPUs:

${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}

The nproc command is used to specify that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.

Suggested Reading

  • NAMD Manual
  • Charm++ Manual