
Supported platforms: Linux / arm64, Linux / amd64
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR.
Before running the NGC NAMD container, ensure that your system meets the requirements described in the System Requirements section.
The ApoA1 benchmark consists of 92,224 atoms and has been a standard NAMD cross-platform benchmark for years. Follow the steps below to use the ApoA1 input dataset to test the NGC NAMD container.
Download the ApoA1 dataset:
wget -O - https://gitlab.com/NVHPC/ngc-examples/raw/master/namd/3.0/get_apoa1.sh | bash
Replace {input_file} in the examples below with the path to one of the provided ApoA1 NAMD input files:
/host_pwd/apoa1/apoa1_nve_cuda_soa.namd
/host_pwd/apoa1/apoa1_nve_cuda.namd
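For convenience, the chosen path can be stored in a shell variable; INPUT_FILE below is an illustrative name, not part of the NGC instructions:
# Illustrative only: pick one of the two provided ApoA1 input files
INPUT_FILE=/host_pwd/apoa1/apoa1_nve_cuda_soa.namd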
Several NAMD images are available, depending on your needs. Set the following environment variable, which is used in the examples below.
export NAMD_TAG={TAG}
Where {TAG} is one of:
3.0_alpha3-singlenode
2.13-singlenode
2.13-multinode
Set the executable name, depending on the chosen tag.
For the 3.0 tags:
export NAMD_EXE=namd3
For the 2.13 tags:
export NAMD_EXE=namd2
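Alternatively, a minimal sketch (an addition, not part of the official instructions) that derives NAMD_EXE from the tag set above:
# Sketch: pick the executable that matches the chosen tag
case "${NAMD_TAG}" in
  3.0*)  export NAMD_EXE=namd3 ;;
  2.13*) export NAMD_EXE=namd2 ;;
  *)     echo "unrecognized NAMD_TAG: ${NAMD_TAG}" >&2 ;;
esac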
The NGC NAMD container provides native Singularity runtime support.
Save the NGC NAMD container as a local Singularity image file, choosing the tag based on whether you're targeting a single workstation or a multi-node cluster.
singularity build ${NAMD_TAG}.sif docker://nvcr.io/hpc/namd:${NAMD_TAG}
The container is now saved in the current directory as ${NAMD_TAG}.sif.
To simplify the examples below, define the SINGULARITY command alias; it may be set as an environment variable in your shell session or batch script.
SINGULARITY="$(which singularity) exec --nv -B $(pwd):/host_pwd ${NAMD_TAG}.sif"
Where:
--nv: expose the host GPUs to the container
-B $(pwd):/host_pwd: expose the current working directory in the container at /host_pwd
There is currently a bug in Singularity 3.1.x and 3.2.x causing LD_LIBRARY_PATH to be incorrectly set within the container environment.
As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:
$ LD_LIBRARY_PATH="" singularity exec ...
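Alternatively, the workaround can be folded into the alias defined above; this env-based variant is a sketch, assuming a Bourne-compatible shell:
# Sketch: clear LD_LIBRARY_PATH on every invocation via env
SINGULARITY="env LD_LIBRARY_PATH= $(which singularity) exec --nv -B $(pwd):/host_pwd ${NAMD_TAG}.sif"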
This mode of running is suitable for interactive execution from a local workstation containing one or more GPUs. There are no requirements other than those stated in the System Requirements section.
Launch NAMD across all CPU cores, utilizing all GPUs, on the local machine:
${SINGULARITY} ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}
The nproc command specifies that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.
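For example, to pin NAMD to a fixed PE count (16 below is an arbitrary illustration; tune it for your hardware):
${SINGULARITY} ${NAMD_EXE} +ppn 16 +setcpuaffinity +idlepoll {input_file}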
An example shell script demonstrating this mode is available.
Start an interactive shell within the container environment:
${SINGULARITY} /bin/bash
Run the example across all CPU cores, utilizing all GPUs:
${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}
Here, the nproc command specifies that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.
The NAMD NGC container allows for parallel Charm++ jobs to be launched fully from within the container. This mode of running is suitable for most clusters.
A Charm++ nodelist file is required and takes the following form:
host {hostname_1} ++cpus {cores_per_node}
host {hostname_2} ++cpus {cores_per_node}
...
host {hostname_n} ++cpus {cores_per_node}
Generation of this nodelist file will vary from cluster to cluster. Common examples include:
Slurm:
NODELIST=$(pwd)/nodelist.${SLURM_JOBID}
for host in $(scontrol show hostnames); do
  echo "host ${host} ++cpus ${SLURM_CPUS_ON_NODE}" >> ${NODELIST}
done
PBS:
PBS_TASK_COUNT=$(grep -c . ${PBS_NODEFILE})
PBS_NODE_COUNT=$(uniq ${PBS_NODEFILE} | wc -l)
PBS_TASKS_PER_NODE=$(( PBS_TASK_COUNT / PBS_NODE_COUNT ))
NODELIST=$(pwd)/nodelist.${PBS_JOBID}
for host in $(uniq ${PBS_NODEFILE}); do
  echo "host ${host} ++cpus ${PBS_TASKS_PER_NODE}" >> ${NODELIST}
done
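Either loop produces a nodelist in the format shown above; on a hypothetical two-node allocation with 40 cores per node it would contain:
host node001 ++cpus 40
host node002 ++cpus 40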
Set SSH options:
SSH="ssh -o PubkeyAcceptedKeyTypes=+ssh-dss -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR"
Set an alias for the charmrun command:
CHARMRUN="charmrun ++remote-shell ${SSH} ++nodelist ${NODELIST} ++p {procs_total}"
Launch namd2 using charmrun:
${SINGULARITY} ${CHARMRUN} ${SINGULARITY} namd2 +ppn {cores_per_node} +setcpuaffinity +idlepoll {input_file}
Where {cores_per_node} is typically set to the number of physical CPU cores on a given node and {procs_total} is set to {cores_per_node} - 1, allowing one core to be used for communication.
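As a worked illustration of that sizing rule, assume hypothetical nodes with 40 physical cores each, so {cores_per_node} = 40 and {procs_total} = 40 - 1 = 39:
# Hypothetical 40-core nodes: one core reserved for communication
CHARMRUN="charmrun ++remote-shell ${SSH} ++nodelist ${NODELIST} ++p 39"
${SINGULARITY} ${CHARMRUN} ${SINGULARITY} namd2 +ppn 40 +setcpuaffinity +idlepoll {input_file}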
Complete example batch scripts are provided which detail launching namd2 across a cluster. Slight modification may be
necessary to ensure compatibility with your specific cluster.
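As a rough sketch of how the pieces above fit together in a Slurm batch script (the node count and remaining placeholders are assumptions to be adapted to your cluster):
#!/bin/bash
#SBATCH --nodes=2

# Sketch: tag, image, and alias as defined earlier
export NAMD_TAG=2.13-multinode
SINGULARITY="$(which singularity) exec --nv -B $(pwd):/host_pwd ${NAMD_TAG}.sif"

# Generate the Charm++ nodelist from the Slurm allocation
NODELIST=$(pwd)/nodelist.${SLURM_JOBID}
for host in $(scontrol show hostnames); do
  echo "host ${host} ++cpus ${SLURM_CPUS_ON_NODE}" >> ${NODELIST}
done

# SSH options and charmrun alias as above
SSH="ssh -o PubkeyAcceptedKeyTypes=+ssh-dss -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR"
CHARMRUN="charmrun ++remote-shell ${SSH} ++nodelist ${NODELIST} ++p {procs_total}"

# Launch namd2 across the allocation
${SINGULARITY} ${CHARMRUN} ${SINGULARITY} namd2 +ppn {cores_per_node} +setcpuaffinity +idlepoll {input_file}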
NGC supports the Docker runtime through the nvidia-docker plugin.
To simplify the examples below, define the DOCKER command alias; it may be set as an environment variable in your shell or batch script.
Single node:
DOCKER="nvidia-docker run -it --rm -v $(pwd):/host_pwd nvcr.io/hpc/namd:${NAMD_TAG}"
Multi-node (InfiniBand):
DOCKER="nvidia-docker run -it --rm -v $(pwd):/host_pwd --device=/dev/infiniband --cap-add=IPC_LOCK --net=host nvcr.io/hpc/namd:${NAMD_TAG}"
Where:
DOCKER: alias used to define the base docker command
-it: allocate a pseudo-TTY
--rm: remove the container on exit
-v $(pwd):/host_pwd: expose the current working directory in the container as /host_pwd
--device=/dev/infiniband --cap-add=IPC_LOCK --net=host: allow access to host InfiniBand device(s)
${NAMD_TAG}: tag name set earlier
This mode of running is suitable for interactive execution from a local workstation containing one or more GPUs. There are no requirements other than those stated in the System Requirements section.
Launch NAMD across all CPU cores, utilizing all GPUs, on the local machine:
${DOCKER} ${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}
The nproc command specifies that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.
An example shell script demonstrating this mode is available.
Start an interactive shell within the container environment:
${DOCKER}
Launch NAMD across all CPU cores, utilizing all GPUs:
${NAMD_EXE} +ppn $(nproc) +setcpuaffinity +idlepoll {input_file}
The nproc command specifies that all available CPU cores should be used. Depending on the system setup, manually specifying the number of PEs may yield better performance.