NGC | Catalog

Quantum ESPRESSO


Description

Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale based on density-functional theory, plane waves, and pseudopotentials.

Publisher

SISSA

Latest Tag

qe-7.1

Modified

September 7, 2022

Compressed Size

1.15 GB

Multinode Support

Yes

Multi-Arch Support

Yes

Quantum ESPRESSO

Quantum ESPRESSO has evolved into a distribution of independent and inter-operable codes in the spirit of an open-source project. The Quantum ESPRESSO distribution consists of a "historical" core set of components, a set of plug-ins that perform more advanced tasks, and several third-party packages designed to be inter-operable with the core components. Researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their codes or by implementing their ideas into existing codes.

System requirements

Before running the NGC Quantum ESPRESSO container, please ensure your system meets the following requirements.

x86_64

  • CPU with AVX2 instruction support
  • Pascal (sm60), Volta (sm70), or Ampere (sm80) NVIDIA GPU(s)
  • CUDA driver version 460 or newer; alternatively >= 450.36.06, r418 (>= 418.40.04), or r440 (>= 440.33.01)

arm64

  • Marvell ThunderX2 CPU
  • CUDA driver version 460 or newer; alternatively >= 450.36.06, r418 (>= 418.40.04), or r440 (>= 440.33.01)
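On x86_64 Linux hosts, the CPU and driver requirements above can be checked quickly from the shell. The commands below are a convenience sketch, not part of the official instructions:

```shell
# Check for AVX2 support in the host CPU (Linux).
if grep -qm1 avx2 /proc/cpuinfo; then
  echo "AVX2: yes"
else
  echo "AVX2: no"
fi

# Report the installed NVIDIA driver version, if the driver is present.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=driver_version --format=csv,noheader
fi
```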

Running Quantum ESPRESSO Examples

The following examples demonstrate using the NGC Quantum ESPRESSO container to run AUSURF112, the DEISA pw benchmark of a gold surface (112 atoms).

Download Dataset

The environment variable BENCHMARK_DIR will be used throughout the example to refer to the directory containing the AUSURF112 input files.

mkdir ausurf
cd ausurf
wget https://repository.prace-ri.eu/git/UEABS/ueabs/-/raw/master/quantum_espresso/test_cases/small/Au.pbe-nd-van.UPF
wget https://repository.prace-ri.eu/git/UEABS/ueabs/-/raw/master/quantum_espresso/test_cases/small/ausurf.in
export BENCHMARK_DIR=${PWD}
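Since a partial download would cause pw.x to fail later, it can help to verify the dataset before continuing. The helper below is an illustrative sketch, not part of the NGC instructions; it checks that both input files are present and non-empty:

```shell
# Illustrative sanity check (hypothetical helper, not part of run_qe.sh):
# confirm the AUSURF112 input files are present and non-empty.
check_benchmark_dir() {
  dir="$1"
  for f in ausurf.in Au.pbe-nd-van.UPF; do
    # -s: true if the file exists and has a size greater than zero
    if [ ! -s "${dir}/${f}" ]; then
      echo "missing or empty: ${dir}/${f}" >&2
      return 1
    fi
  done
  echo "dataset OK"
}

# Usage: check_benchmark_dir "${BENCHMARK_DIR}"
```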

Although the Quantum ESPRESSO command line utilities may be called directly within the NGC Quantum ESPRESSO container, this example uses a convenience script, run_qe.sh, which sets the common command line arguments needed for this example. Place this helper script in the benchmark data directory.

cd ${BENCHMARK_DIR}
wget https://gitlab.com/NVHPC/ngc-examples/-/raw/master/qe/single-node/run_qe.sh
chmod +x run_qe.sh

While this script attempts to set reasonable defaults for common HPC configurations, additional tuning may be required for maximum performance. The environment variable QE_GPU_COUNT, when set in the container, overrides the default behavior by specifying the number of GPUs to use. By default, all GPUs are used.
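The defaulting behavior described above can be sketched as follows (hypothetical logic for illustration, not the actual contents of run_qe.sh):

```shell
# Hypothetical sketch of GPU-count resolution (run_qe.sh may differ):
# honor QE_GPU_COUNT if set, otherwise use all visible GPUs.
resolve_gpu_count() {
  if [ -n "${QE_GPU_COUNT:-}" ]; then
    echo "${QE_GPU_COUNT}"
  elif command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -L | wc -l
  else
    echo 1   # fallback when no GPU tooling is present
  fi
}
```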

Select Tag

Several Quantum ESPRESSO images are available, depending on your needs. Set the following environment variable, which will be used in the examples below.

export QE_TAG={TAG}

Where {TAG} is qe-7.1 or any other tag previously posted on NGC.

Running with nvidia-docker

NGC supports the Docker runtime through the NVIDIA Container Toolkit (nvidia-docker).

cd ${BENCHMARK_DIR}
docker run -it --rm --gpus all --ipc=host -v ${PWD}:/host_pwd -w /host_pwd nvcr.io/hpc/quantum_espresso:${QE_TAG} ./run_qe.sh

Running with Singularity

cd ${BENCHMARK_DIR}
singularity run --nv -B${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/quantum_espresso:${QE_TAG} ./run_qe.sh

Note: Singularity 3.1.x - 3.5.x

There is a bug in Singularity 3.1.x through 3.5.x that causes LD_LIBRARY_PATH to be set incorrectly within the container environment. As a workaround, LD_LIBRARY_PATH must be cleared before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...

Running multi-node with Slurm and Singularity

Clusters running the Slurm resource manager and the Singularity container runtime may launch parallel Quantum ESPRESSO experiments directly through srun. The NGC Quantum ESPRESSO container supports PMI2, which is available in most Slurm installations, as well as PMIx 3. A typical parallel pw.x experiment takes the following form.

srun --mpi=pmi2 [srun_flags] singularity run --nv [singularity_flags] pw.x [qe_flags]

An example Slurm batch script that may be modified for your specific cluster setup is available in the NGC examples repository on GitLab.
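For reference, such a batch script might look like the following sketch. The node count, tasks per node, GPU count, and wall time below are placeholder assumptions to adapt to your cluster, and it assumes BENCHMARK_DIR and QE_TAG are exported as in the earlier steps:

```shell
# Write a minimal example batch script for the AUSURF112 case.
# Node/task/GPU counts and wall time below are placeholder values.
cat > run_ausurf.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=qe-ausurf112
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4      # typically one MPI rank per GPU
#SBATCH --gpus-per-node=4
#SBATCH --time=01:00:00

cd "${BENCHMARK_DIR}"
srun --mpi=pmi2 \
     singularity run --nv -B "${PWD}:/host_pwd" --pwd /host_pwd \
     docker://nvcr.io/hpc/quantum_espresso:${QE_TAG} \
     pw.x -input ausurf.in
EOF
```

Submit with `sbatch run_ausurf.sbatch` after exporting BENCHMARK_DIR and QE_TAG; sbatch propagates the submission environment to the job by default.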

Suggested Reading