Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale based on density-functional theory, plane waves, and pseudopotentials.
Quantum ESPRESSO has evolved into a distribution of independent and inter-operable codes in the spirit of an open-source project. The Quantum ESPRESSO distribution consists of a "historical" core set of components, a set of plug-ins that perform more advanced tasks, and several third-party packages designed to be inter-operable with the core components. Researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their codes or by implementing their ideas into existing codes.
Before running the NGC Quantum ESPRESSO container, please ensure your system meets the following requirements.
The following examples demonstrate using the NGC Quantum ESPRESSO container to run the AUSURF112, Gold surface (112 atoms), DEISA pw benchmark.
The environment variable BENCHMARK_DIR will be used throughout the example to refer to the directory containing the AUSURF112 input files.
mkdir ausurf
cd ausurf
wget https://repository.prace-ri.eu/git/UEABS/ueabs/-/raw/master/quantum_espresso/test_cases/small/Au.pbe-nd-van.UPF
wget https://repository.prace-ri.eu/git/UEABS/ueabs/-/raw/master/quantum_espresso/test_cases/small/ausurf.in
export BENCHMARK_DIR=${PWD}
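As an optional sanity check before launching the container, the downloads can be verified with a small helper. This function is a suggestion, not part of the NGC instructions; the file names come from the wget steps above.

```shell
# Sketch of a sanity check: verify a directory holds the two AUSURF112
# input files fetched above. Returns non-zero if either file is missing.
check_ausurf_inputs() {
  dir="$1"
  for f in ausurf.in Au.pbe-nd-van.UPF; do
    if [ ! -f "${dir}/${f}" ]; then
      echo "missing ${dir}/${f}"
      return 1
    fi
  done
  echo "inputs ok"
}
```

Running `check_ausurf_inputs "${BENCHMARK_DIR}"` prints "inputs ok" when both files are present.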
Although the Quantum ESPRESSO command-line utilities may be called directly within the NGC Quantum ESPRESSO container, this example uses a convenience script, run_qe.sh, which sets the common command-line arguments needed for the AUSURF experiment. The helper script should be placed within the benchmark data directory.
cd ${BENCHMARK_DIR}
wget https://gitlab.com/NVHPC/ngc-examples/-/raw/master/qe/single-node/run_qe.sh
chmod +x run_qe.sh
While this script attempts to set reasonable defaults for common HPC configurations, additional tuning is required for maximum performance. The following environment variables may be set in the container environment to modify the default behavior.
QE_GPU_COUNT
: Set the number of GPUs to use; defaults to all GPUs.

cd ${BENCHMARK_DIR}
docker run -it --rm --gpus all --ipc=host -v ${PWD}:/host_pwd -w /host_pwd nvcr.io/hpc/quantum_espresso:v7.3.1 ./run_qe.sh
Docker versions below 1.40 must enable GPU support with --runtime nvidia.
docker run -it --rm --runtime nvidia --ipc=host -v ${PWD}:/host_pwd -w /host_pwd nvcr.io/hpc/quantum_espresso:v7.3.1 ./run_qe.sh
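The tuning variables described above can be passed into the container with Docker's -e flag. For example, to restrict the run to two GPUs (assuming run_qe.sh honors QE_GPU_COUNT as documented; the value 2 is an arbitrary example):

```shell
# Override the default GPU count inside the container via the environment.
# QE_GPU_COUNT is read by run_qe.sh; 2 here is an arbitrary example value.
docker run -it --rm --gpus all --ipc=host \
  -e QE_GPU_COUNT=2 \
  -v ${PWD}:/host_pwd -w /host_pwd \
  nvcr.io/hpc/quantum_espresso:v7.3.1 ./run_qe.sh
```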
cd ${BENCHMARK_DIR}
singularity run --nv -B${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/quantum_espresso:v7.3.1 ./run_qe.sh
There is currently an issue in Singularity versions below v3.5 causing the LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:
LD_LIBRARY_PATH="" singularity run --nv -B${PWD}:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/quantum_espresso:v7.3.1 ./run_qe.sh
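If the installed Singularity version is not known in advance, the workaround can be applied conditionally. The helper below is a hypothetical sketch (not part of the NGC docs) that parses a major.minor version string such as the output of `singularity version`:

```shell
# Return success (0) when the given Singularity version string is below 3.5,
# i.e. when the LD_LIBRARY_PATH workaround is needed. Assumes the string
# begins "major.minor", e.g. "3.8.7-1.el8".
needs_ld_workaround() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  [ "$major" -lt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -lt 5 ]; }
}
```

For example, `needs_ld_workaround "$(singularity version)" && export LD_LIBRARY_PATH=""` clears the variable only on affected versions.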
Clusters running the Slurm resource manager and the Singularity container runtime may launch parallel Quantum ESPRESSO experiments directly through srun. The NGC Quantum ESPRESSO container supports pmix, which is available within most Slurm installations. A typical parallel pw.x experiment would take the following form.
srun --mpi=pmix [srun_flags] singularity run --nv [singularity_flags] pw.x [qe_flags]
An example Slurm batch script that may be adapted to your specific cluster setup may be viewed here.
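Putting the pieces together, a minimal batch script along those lines might look like the following sketch. The job name, node, task, and GPU counts are placeholders to adapt to your cluster, and the pw.x -input flag is standard Quantum ESPRESSO usage; none of these values come from the NGC example itself.

```shell
#!/bin/bash
#SBATCH --job-name=qe-ausurf       # placeholder job name
#SBATCH --nodes=2                  # adjust to your cluster
#SBATCH --ntasks-per-node=4        # typically one MPI rank per GPU
#SBATCH --gpus-per-node=4          # adjust to your cluster

cd ${BENCHMARK_DIR}

# Launch pw.x in parallel through srun with PMIx, as described above.
srun --mpi=pmix \
  singularity run --nv -B${PWD}:/host_pwd --pwd /host_pwd \
  docker://nvcr.io/hpc/quantum_espresso:v7.3.1 \
  pw.x -input ausurf.in
```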