Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale based on density-functional theory, plane waves, and pseudopotentials.
Quantum ESPRESSO has evolved into a distribution of independent and inter-operable codes in the spirit of an open-source project. The Quantum ESPRESSO distribution consists of a "historical" core set of components, a set of plug-ins that perform more advanced tasks, and several third-party packages designed to be inter-operable with the core components. Researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their codes or by implementing their ideas into existing codes.
Before running the NGC Quantum ESPRESSO container, please ensure your system meets the following requirements.
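As a quick sanity check (assuming an NVIDIA GPU driver and at least one of the container runtimes used below are already installed), the following commands confirm the basic prerequisites are in place:

# Verify the driver can see the GPU(s)
nvidia-smi
# Verify a container runtime is available (either one will do)
docker --version
singularity --version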
The following examples demonstrate how to use the NGC Quantum ESPRESSO container to run AUSURF112, the DEISA pw benchmark of a gold surface (112 atoms).
The environment variable BENCHMARK_DIR will be used throughout the example to refer to the directory containing the AUSURF112 input files.
mkdir ausurf
cd ausurf
wget https://repository.prace-ri.eu/git/UEABS/ueabs/-/raw/master/quantum_espresso/test_cases/small/Au.pbe-nd-van.UPF
wget https://repository.prace-ri.eu/git/UEABS/ueabs/-/raw/master/quantum_espresso/test_cases/small/ausurf.in
export BENCHMARK_DIR=${PWD}
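At this point the benchmark directory should contain the pseudopotential and the input file; a quick listing confirms the downloads succeeded:

ls ${BENCHMARK_DIR}
# Expected: Au.pbe-nd-van.UPF  ausurf.in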
Although the Quantum ESPRESSO command line utilities may be called directly within the NGC Quantum ESPRESSO container, this example uses a convenience script, run_qe.sh, which sets the common command line arguments needed for this example. This helper script should be placed within the benchmark data directory.
cd ${BENCHMARK_DIR}
wget https://gitlab.com/NVHPC/ngc-examples/-/raw/master/qe/single-node/run_qe.sh
chmod +x run_qe.sh
While this script attempts to set reasonable defaults for common HPC configurations, additional tuning is required for maximum performance. The environment variable QE_GPU_COUNT modifies the default behavior and may be set within the container to the number of GPUs to use; by default, all available GPUs are used.
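For example, to restrict a run to two GPUs (2 is an arbitrary illustrative value, and ${QE_TAG} is defined in the next step), the variable can be forwarded into the container at launch:

# Docker: forward the variable with -e
docker run -it --rm --gpus all --ipc=host -e QE_GPU_COUNT=2 -v ${PWD}:/host_pwd -w /host_pwd nvcr.io/hpc/quantum_espresso:${QE_TAG} ./run_qe.sh
# Singularity: variables prefixed with SINGULARITYENV_ are set inside the container
export SINGULARITYENV_QE_GPU_COUNT=2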
Several Quantum ESPRESSO images are available, depending on your needs. Set the following environment variable, which will be used in the examples below.
export QE_TAG={TAG}
Where {TAG} is qe-6.8 or any other tag previously posted on NGC.
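For example, to use the qe-6.8 image and optionally pre-pull it so the first run does not pay the download cost:

export QE_TAG=qe-6.8
# Optional: fetch the image ahead of time
docker pull nvcr.io/hpc/quantum_espresso:${QE_TAG}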
NGC supports the Docker runtime through the NVIDIA Container Toolkit (formerly the nvidia-docker plugin); Docker 19.03 and later expose GPUs to the container via the --gpus flag used below.
cd ${BENCHMARK_DIR}
docker run -it --rm --gpus all --ipc=host -v ${PWD}:/host_pwd -w /host_pwd nvcr.io/hpc/quantum_espresso:${QE_TAG} ./run_qe.sh
cd ${BENCHMARK_DIR}
singularity run --nv -B${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/quantum_espresso:${QE_TAG} ./run_qe.sh
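Optionally, the container may first be saved as a local image file (the name qe-${QE_TAG}.sif is an arbitrary choice), which avoids repeated pulls and is convenient on compute nodes without external network access:

# Build a local SIF image once, then run from it
singularity build qe-${QE_TAG}.sif docker://nvcr.io/hpc/quantum_espresso:${QE_TAG}
singularity run --nv -B${PWD}:/host_pwd --pwd /host_pwd qe-${QE_TAG}.sif ./run_qe.sh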
There is currently a bug in Singularity 3.1.x and 3.2.x causing LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:
$ LD_LIBRARY_PATH="" singularity exec ...
Clusters running the Slurm resource manager and the Singularity container runtime may launch parallel Quantum ESPRESSO experiments directly through srun. The NGC Quantum ESPRESSO container supports pmi2, which is available within most Slurm installations, as well as pmix3. A typical parallel pw.x experiment would take the following form.
srun --mpi=pmi2 [srun_flags] singularity run --nv [singularity_flags] docker://nvcr.io/hpc/quantum_espresso:${QE_TAG} pw.x [qe_flags]
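As a concrete illustration, with placeholder values that must be adapted to your cluster (2 nodes with 4 tasks each, the local image file built above, and -npool as one common pw.x parallelization flag):

srun --mpi=pmi2 -N 2 --ntasks-per-node=4 singularity run --nv -B${PWD}:/host_pwd --pwd /host_pwd qe-${QE_TAG}.sif pw.x -input ausurf.in -npool 2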
An example Slurm batch script that may be modified for your specific cluster setup can be found in the NGC examples repository.
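A minimal sketch of such a batch script is shown below; the resource requests and values are illustrative assumptions that must be adapted to your site:

#!/bin/bash
#SBATCH --job-name=qe-ausurf        # illustrative job name
#SBATCH --nodes=2                   # adjust to your allocation
#SBATCH --ntasks-per-node=4         # typically one MPI task per GPU
#SBATCH --gpus-per-node=4           # requires GPU-aware Slurm; adjust to your hardware
#SBATCH --time=01:00:00

export QE_TAG=qe-6.8                # or any other tag posted on NGC

cd ${BENCHMARK_DIR}
srun --mpi=pmi2 singularity run --nv -B${PWD}:/host_pwd --pwd /host_pwd \
    docker://nvcr.io/hpc/quantum_espresso:${QE_TAG} pw.x -input ausurf.in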