NGC | Catalog

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a software application designed for molecular dynamics simulations.
Sandia National Lab
Latest Tag
March 1, 2024
Compressed Size
561.38 MB
Multinode Support
Multi-Arch Support


Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a software application designed for molecular dynamics simulations. It has potentials for solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers), and coarse-grained or mesoscopic systems. The main use case is atomic-scale particle modeling or, more generically, parallel particle simulation at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial decomposition of the simulation domain. Read more on the LAMMPS website.

System requirements

Before running the NGC LAMMPS container, please ensure your system meets the following requirements.

  • One of the following container runtimes
  • One of the following NVIDIA GPU(s)
    • Pascal(sm60)
    • Volta (sm70)
    • Ampere (sm80)
    • Hopper (sm90)


  • CPU with AVX2 instruction support
  • One of the following CUDA driver versions
    • r520 (>=520.61.05)
    • >= 450.80.02


  • Marvell ThunderX2 CPU
  • CUDA driver version >= r460

The following examples demonstrate using the NGC LAMMPS container to run a standard Lennard-Jones 3D melt experiment.

The input file must first be downloaded. The environment variable BENCHMARK_DIR will be used throughout the example to refer to the directory containing the input file, in.lj.txt. Throughout this example the container version will be referenced as $TAG; replace this with the tag you wish to run.
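The setup described above can be sketched as follows. The directory path and image tag shown here are placeholders, not values from this page; substitute your own benchmark directory and the NGC LAMMPS tag you wish to run.

```shell
# Hypothetical paths and tag -- substitute your own values.
export BENCHMARK_DIR="$HOME/lammps_bench"     # directory that will hold in.lj.txt
mkdir -p "$BENCHMARK_DIR"
export TAG="example-tag"                      # placeholder for the tag you wish to run
cd "$BENCHMARK_DIR"
```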


Although the LAMMPS executable, lmp, may be called directly within the NGC LAMMPS container, this example will use a convenience script. This script sets the common command-line arguments needed for the example experiment. The helper script should be placed within the benchmark data directory.

chmod +x

Running with nvidia-docker

docker run --rm --gpus all --ipc=host -v $PWD:/host_pwd -w /host_pwd $TAG ./

Note: Docker < v1.40

Docker versions below 1.40 must enable GPU support with --runtime nvidia.

docker run --rm --runtime nvidia --ipc=host -v $PWD:/host_pwd -w /host_pwd $TAG ./

Running with Singularity

singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker://$TAG ./

Note: Singularity < v3.5

There is currently an issue in Singularity versions below v3.5 that causes LD_LIBRARY_PATH to be set incorrectly within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

LD_LIBRARY_PATH="" singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker://$TAG ./

Running multi-node with Slurm and Singularity

Clusters running the Slurm resource manager and Singularity container runtime may launch parallel LAMMPS experiments directly through srun. The NGC LAMMPS container supports pmi2, which is available within most Slurm installations, as well as pmix3. A typical parallel experiment would take the following form.

srun --mpi=pmi2 [srun_flags] singularity run --nv [singularity_flags] lmp [lammps_flags]

An example Slurm batch script that may be modified for your specific cluster setup may be viewed here.
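A minimal batch script following the srun pattern above might look like the sketch below. The job settings (node counts, tasks, GPUs) are assumptions to adapt for your cluster; BENCHMARK_DIR and TAG are the variables introduced earlier in this example.

```shell
#!/bin/bash
#SBATCH --job-name=lammps-lj        # hypothetical job settings; adjust for your cluster
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=4

# BENCHMARK_DIR holds in.lj.txt; TAG is the container tag you wish to run.
srun --mpi=pmi2 singularity run --nv \
    -B "${BENCHMARK_DIR}:/host_pwd" --pwd /host_pwd \
    "docker://${TAG}" \
    lmp -in /host_pwd/in.lj.txt
```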

Suggested Reading


LAMMPS Benchmarking

NVIDIA Docker