
QMCPACK

Description: QMCPACK is an open-source, high-performance electronic structure code that implements numerous Quantum Monte Carlo algorithms. Its main applications are electronic structure calculations of molecular, periodic 2D, and periodic 3D solid-state systems.
Publisher: Oak Ridge National Laboratory
Latest Tag: v3.16.0
Modified: April 1, 2024
Compressed Size: 1.4 GB
Multinode Support: No
Multi-Arch Support: Yes

QMCPACK

QMCPACK is an open-source, high-performance electronic structure code that implements numerous Quantum Monte Carlo algorithms. Its main applications are electronic structure calculations of molecular, periodic 2D, and periodic 3D solid-state systems. Variational Monte Carlo (VMC), diffusion Monte Carlo (DMC), and a number of other advanced QMC algorithms are implemented. By directly solving the Schrödinger equation, QMC methods offer greater accuracy than methods such as density functional theory, but at the trade-off of much greater computational expense. Distinct from many other correlated many-body methods, QMC methods are readily applicable to both bulk (periodic) and isolated molecular systems. See the QMCPACK Manual for further details.

System requirements

Before running the NGC QMCPACK container, please ensure your system meets the following requirements.

  • One of the following container runtimes
    • Docker
    • Singularity
  • One of the following NVIDIA GPU(s)
    • Pascal (sm60)
    • Volta (sm70)
    • Ampere (sm80)
    • Hopper (sm90)

x86_64

QMCPACK has been optimized for the x86_64_v3 microarchitecture level.

  • CPU with at least AVX2 instruction support
  • One of the following CUDA driver versions
    • r450 (>= 450.80.02)
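
A quick way to confirm AVX2 support on an x86_64 host is to query /proc/cpuinfo; this is a generic Linux check, not specific to the QMCPACK container:

grep -m 1 -o avx2 /proc/cpuinfo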

arm64

  • Neoverse V1 CPU
  • CUDA driver version >= r450
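
Whether the installed NVIDIA driver meets the minimum on either architecture can be confirmed with nvidia-smi, which reports the GPU model and driver version (again a generic check, not specific to this container):

nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
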
Examples

The following examples demonstrate how to run the NGC QMCPACK container on systems ranging from single GPU workstations up to large-scale production HPC clusters.

Set the NGC QMCPACK container version:

export QMCPACK_VERSION=v3.16.0

Use the QMCPACK sample data fetching script to automatically download and extract the sample data.

wget -O - https://gitlab.com/NVHCP/ngc-examples/raw/master/qmcpack/v3.5.0/get_S32.sh | bash
cd ./S32_example
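
If the script completes successfully, the DMC input file used below should now be present in the working directory (a simple sanity check, assuming the sample archive layout has not changed):

ls NiO-fcc-S32-dmc.xml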

To run QMCPACK on the NiO-fcc-S32-dmc.xml sample data:

Running with nvidia-docker

docker run --gpus all -it --rm --privileged --net=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/qmcpack:$QMCPACK_VERSION mpirun -np 2 qmcpack /host_pwd/NiO-fcc-S32-dmc.xml
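
On multi-GPU workstations the run can be restricted to specific devices using Docker's --gpus device-list syntax; for example, the following variant pins the two MPI ranks to the first two GPUs (the device indices here are only an illustration):

docker run --gpus '"device=0,1"' -it --rm --privileged --net=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/qmcpack:$QMCPACK_VERSION mpirun -np 2 qmcpack /host_pwd/NiO-fcc-S32-dmc.xml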

Note: Docker < v1.40

For Docker versions below 1.40, GPU support must be enabled with the --runtime nvidia flag instead of --gpus all.

docker run --runtime nvidia -it --rm --privileged --net=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/qmcpack:$QMCPACK_VERSION mpirun -np 2 qmcpack /host_pwd/NiO-fcc-S32-dmc.xml

Note: Docker <= 20.xx.xx

There is currently a bug in older Docker versions when used with newer Ubuntu images. Running the latest QMCPACK tag therefore requires either the --privileged flag or an update to Docker.
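
To determine which Docker version is installed before deciding whether --privileged or an upgrade is needed, the server version can be queried directly (a generic Docker check):

docker version --format '{{.Server.Version}}'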

Running with Singularity

singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/qmcpack:$QMCPACK_VERSION mpirun -np 2 qmcpack /host_pwd/NiO-fcc-S32-dmc.xml

Note: Singularity < v3.5

There is currently an issue in Singularity versions below v3.5 that causes LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

LD_LIBRARY_PATH="" singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/qmcpack:$QMCPACK_VERSION mpirun -np 2 qmcpack /host_pwd/NiO-fcc-S32-dmc.xml

Running multi-node with Slurm and Singularity

Clusters running the Slurm resource manager and Singularity container runtime may launch parallel QMCPACK experiments directly through srun. The NGC QMCPACK container supports pmi2, which is available within most Slurm installations, as well as pmix3. A typical parallel experiment would take the following form.

$ srun --mpi=pmi2 [srun_flags] singularity run --nv [singularity_flags] docker://nvcr.io/hpc/qmcpack:$QMCPACK_VERSION <input.xml>
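
As a concrete illustration, a batch script along the following lines could be submitted with sbatch; the node count, tasks per node, GPU count, walltime, and input path are placeholders and depend on the cluster configuration:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --gpus-per-node=2
#SBATCH --time=01:00:00

export QMCPACK_VERSION=v3.16.0

# Launch one QMCPACK rank per Slurm task; Slurm provides the MPI wire-up via PMI2.
srun --mpi=pmi2 singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/qmcpack:$QMCPACK_VERSION /host_pwd/NiO-fcc-S32-dmc.xml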

Suggested Reading