LAMMPS

Description
Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a software application designed for molecular dynamics simulations.
Publisher
Sandia National Laboratories
Latest Tag
patch_15Jun2023
Modified
April 1, 2024
Compressed Size
561.38 MB
Multinode Support
Yes
Multi-Arch Support
Yes

LAMMPS

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a software application designed for molecular dynamics simulations. It has potentials for solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers), and coarse-grained or mesoscopic systems. The main use case is modeling particles at the atomic scale or, more generically, acting as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial decomposition of the simulation domain. Read more on the LAMMPS website.

System requirements

Before running the NGC LAMMPS container please ensure your system meets the following requirements.

  • One of the following container runtimes
    • Docker (with NVIDIA GPU support)
    • Singularity
  • One of the following NVIDIA GPU(s)
    • Pascal (sm60)
    • Volta (sm70)
    • Ampere (sm80)
    • Hopper (sm90)

x86_64

  • CPU with AVX2 instruction support
  • One of the following CUDA driver versions
    • r520 (>= 520.61.05)
    • >= 450.80.02

arm64

  • Marvell ThunderX2 CPU
  • CUDA driver version >= r460
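
The following informal checks can help verify a machine against these requirements. The compute_cap query field requires a recent nvidia-smi; drop it from the query if your driver does not support it.

grep -qm1 avx2 /proc/cpuinfo && echo "AVX2: yes" || echo "AVX2: no"
nvidia-smi --query-gpu=name,driver_version,compute_cap --format=csv
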
Examples

The following examples demonstrate using the NGC LAMMPS container to run a standard Lennard-Jones 3D melt experiment.

The input file must first be downloaded. The environment variable BENCHMARK_DIR will be used throughout the example to refer to the directory containing the input file, in.lj.txt. Throughout this example the container version will be referenced as $TAG; replace this with the tag you wish to run.

wget https://lammps.sandia.gov/inputs/in.lj.txt
export BENCHMARK_DIR=$PWD
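
For example, to use the latest tag listed above:

export TAG=patch_15Jun2023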

Although the LAMMPS executable, lmp, may be called directly within the NGC LAMMPS container, this example uses a convenience script, run_lammps.sh, which sets the common command line arguments needed for the example experiment. Place this helper script in the benchmark data directory.

cd $BENCHMARK_DIR
wget https://gitlab.com/NVHPC/ngc-examples/-/raw/master/lammps/single-node/run_lammps.sh
chmod +x run_lammps.sh
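
For reference, a wrapper of this kind typically amounts to a single lmp invocation with the Kokkos GPU backend enabled. The sketch below is hypothetical (the actual run_lammps.sh may differ), and the -var values are illustrative box-scaling factors:

#!/bin/bash
# Hypothetical sketch of a LAMMPS launch wrapper; the real script may differ.
GPUS=${GPUS:-1}
# -k on g N -sf kk enables the Kokkos GPU backend on N GPUs per node.
mpirun -np "${GPUS}" lmp -k on g "${GPUS}" -sf kk \
    -pk kokkos neigh full comm device \
    -var x 8 -var y 8 -var z 8 -in in.lj.txt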

Running with nvidia-docker

cd $BENCHMARK_DIR
docker run --rm --gpus all --ipc=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/lammps:$TAG ./run_lammps.sh
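
To restrict the container to specific GPUs, the --gpus flag also accepts a device list (the indices below are illustrative):

docker run --rm --gpus '"device=0,1"' --ipc=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/lammps:$TAG ./run_lammps.sh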

Note: Docker < v1.40

Docker versions below 1.40 do not support the --gpus flag and must enable GPU support with --runtime nvidia instead.

docker run --rm --runtime nvidia --ipc=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/lammps:$TAG ./run_lammps.sh

Running with Singularity

cd $BENCHMARK_DIR
singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/lammps:$TAG ./run_lammps.sh
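
To avoid re-converting the image on every run, the container can first be pulled to a local SIF file (the filename lammps.sif is arbitrary):

singularity pull lammps.sif docker://nvcr.io/hpc/lammps:$TAG
singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd lammps.sif ./run_lammps.sh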

Note: Singularity < v3.5

There is currently an issue in Singularity versions below v3.5 that causes LD_LIBRARY_PATH to be set incorrectly within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

LD_LIBRARY_PATH="" singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/lammps:$TAG ./run_lammps.sh

Running multi-node with Slurm and Singularity

Clusters running the Slurm resource manager and Singularity container runtime may launch parallel LAMMPS experiments directly through srun. The NGC LAMMPS container supports pmi2, which is available within most Slurm installations, as well as pmix3. A typical parallel experiment would take the following form.

srun --mpi=pmi2 [srun_flags] singularity run --nv [singularity_flags] lmp [lammps_flags]
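
As a concrete illustration, a two-node run with four GPUs (and four MPI ranks) per node might look as follows; the node and GPU counts are placeholders to adapt to your cluster:

srun --mpi=pmi2 -N 2 --ntasks-per-node=4 singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/lammps:$TAG lmp -k on g 4 -sf kk -pk kokkos neigh full comm device -in in.lj.txt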

An example Slurm batch script, which can be adapted to your specific cluster setup, may be viewed here.
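
A minimal sketch of such a batch script is shown below; all resource values are hypothetical and should be adapted to your cluster:

#!/bin/bash
# Hypothetical Slurm batch script for the LJ melt example; adapt counts and paths.
#SBATCH --job-name=lammps-lj
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=4
#SBATCH --time=00:30:00

export TAG=patch_15Jun2023
cd $BENCHMARK_DIR
srun --mpi=pmi2 singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd \
    docker://nvcr.io/hpc/lammps:$TAG ./run_lammps.sh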

Suggested Reading

LAMMPS Manual

LAMMPS Benchmarking

NVIDIA Docker