Quantum ESPRESSO

Description
Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale based on density-functional theory, plane waves, and pseudopotentials.
Publisher
SISSA
Latest Tag
qe-7.3.1
Modified
May 3, 2025
Compressed Size
1.61 GB
Multinode Support
Yes
Multi-Arch Support
Yes

Quantum ESPRESSO

Quantum ESPRESSO has evolved into a distribution of independent and inter-operable codes in the spirit of an open-source project. The distribution consists of a "historical" core set of components, a set of plug-ins that perform more advanced tasks, and several third-party packages designed to be inter-operable with the core components. Researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their codes or by implementing their ideas into existing codes.

System requirements

Before running the NGC Quantum ESPRESSO container, please ensure your system meets the following requirements.

  • One of the following container runtimes:
    • nvidia-docker
    • Singularity >= 3.1
  • One of the following NVIDIA GPUs:
    • Pascal (sm60)
    • Volta (sm70)
    • Ampere (sm80)
    • Hopper (sm90)
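One quick way to confirm the GPU generation is to query the device's compute capability. The sketch below maps the four listed targets (sm60/sm70/sm80/sm90) to a supported/unsupported verdict; `supported_cap` is a helper name invented here, and the `compute_cap` query field requires a reasonably recent nvidia-smi build. Other capability values (e.g. 6.1 or 7.5) are not in the container's listed target set.

```shell
# Map a compute capability string to the container's listed GPU targets.
# Only the four documented targets (sm60/sm70/sm80/sm90) are accepted.
supported_cap() {
  case "$1" in
    6.0|7.0|8.0|9.0) echo "supported" ;;
    *)               echo "unsupported" ;;
  esac
}

# Query the first GPU, if nvidia-smi is available on this host.
if command -v nvidia-smi >/dev/null 2>&1; then
  cap=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -n1)
  echo "compute capability ${cap}: $(supported_cap "${cap}")"
fi
```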

x86_64

  • CPU with AVX2 instruction support
  • One of the following CUDA driver versions:
    • >= 550
    • r535 (>= 535.54.03)
    • r470 (>= 470.57.02)

arm64

  • Marvell ThunderX2 CPU
  • CUDA driver version >= 525.60.13

Examples

The following examples demonstrate using the NGC Quantum ESPRESSO container to run AUSURF112, the gold-surface (112 atoms) case from the DEISA pw benchmark suite.

The environment variable BENCHMARK_DIR will be used throughout the example to refer to the directory containing the AUSURF112 input files.

mkdir ausurf
cd ausurf
wget https://repository.prace-ri.eu/git/UEABS/ueabs/-/raw/master/quantum_espresso/test_cases/small/Au.pbe-nd-van.UPF
wget https://repository.prace-ri.eu/git/UEABS/ueabs/-/raw/master/quantum_espresso/test_cases/small/ausurf.in
export BENCHMARK_DIR=${PWD}

Although the Quantum ESPRESSO command-line utilities may be called directly within the NGC Quantum ESPRESSO container, this example uses a convenience script, run_qe.sh, which sets the common command-line arguments needed for the AUSURF experiment. Place this helper script in the benchmark data directory.

cd ${BENCHMARK_DIR}
wget https://gitlab.com/NVHPC/ngc-examples/-/raw/master/qe/single-node/run_qe.sh
chmod +x run_qe.sh
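For reference, pw.x can also be invoked directly instead of through run_qe.sh. The sketch below uses the standard Quantum ESPRESSO parallelization flags; the rank count and pool count are illustrative rather than tuned values, and it assumes the container's bundled MPI launcher (mpirun) is on the PATH.

```shell
cd ${BENCHMARK_DIR}
# Launch pw.x directly in the container: 4 MPI ranks, 2 k-point pools.
docker run -it --rm --gpus all --ipc=host \
    -v ${PWD}:/host_pwd -w /host_pwd \
    nvcr.io/hpc/quantum_espresso:v7.3.1 \
    mpirun -n 4 pw.x -npool 2 -input ausurf.in
```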

While this script attempts to set reasonable defaults for common HPC configurations, additional tuning may be required for maximum performance. The following environment variables may be set in the container environment to modify the default behavior.

  • QE_GPU_COUNT: Set the number of GPUs to use, defaults to all GPUs
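For example, to limit the helper script to two GPUs (the value 2 is illustrative), the variable can be passed into the container environment:

```shell
cd ${BENCHMARK_DIR}
# Pass QE_GPU_COUNT into the container to override the all-GPUs default.
docker run -it --rm --gpus all --ipc=host \
    -e QE_GPU_COUNT=2 \
    -v ${PWD}:/host_pwd -w /host_pwd \
    nvcr.io/hpc/quantum_espresso:v7.3.1 ./run_qe.sh
```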

Running with nvidia-container-toolkit

cd ${BENCHMARK_DIR}
docker run -it --rm --gpus all --ipc=host -v ${PWD}:/host_pwd -w /host_pwd nvcr.io/hpc/quantum_espresso:v7.3.1 ./run_qe.sh

Note: Docker < v1.40

Docker versions below 1.40 must enable GPU support with --runtime nvidia.

docker run -it --rm --runtime nvidia --ipc=host -v ${PWD}:/host_pwd -w /host_pwd nvcr.io/hpc/quantum_espresso:v7.3.1 ./run_qe.sh

Running with Singularity

cd ${BENCHMARK_DIR}
singularity run --nv -B${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/quantum_espresso:v7.3.1 ./run_qe.sh

Note: Singularity < v3.5

There is a known issue in Singularity versions below v3.5 that causes LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, unset LD_LIBRARY_PATH before invoking Singularity:

LD_LIBRARY_PATH="" singularity run --nv -B${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/quantum_espresso:v7.3.1 ./run_qe.sh

Running multi-node with Slurm and Singularity

Clusters running the Slurm resource manager and Singularity container runtime may launch parallel Quantum ESPRESSO experiments directly through srun. The NGC Quantum ESPRESSO container supports PMIx, which is available in most Slurm installations. A typical parallel pw.x experiment takes the following form.

srun --mpi=pmix [srun_flags] singularity run --nv [singularity_flags] pw.x [qe_flags]

An example Slurm batch script that may be modified for your specific cluster setup is available in the NVHPC ngc-examples repository (see the run_qe.sh URL above).
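As a sketch, such a batch script might look like the following. The node, task, and GPU counts are placeholders to adapt to your cluster, and it assumes BENCHMARK_DIR is set as in the earlier steps.

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=4
#SBATCH --time=00:30:00

# Resource counts above are placeholders; adapt them to your cluster.
cd ${BENCHMARK_DIR}
srun --mpi=pmix singularity run --nv \
    -B${PWD}:/host_pwd --pwd /host_pwd \
    docker://nvcr.io/hpc/quantum_espresso:v7.3.1 \
    pw.x -input ausurf.in
```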

Suggested Reading

  • Quantum ESPRESSO Manual
  • Quantum ESPRESSO Tutorials
  • nvidia-container-toolkit
  • Docker GPU runtime options