BigDFT is a massively parallel electronic structure code based on density functional theory (DFT). It uses a wavelet basis set and can employ a linear-scaling method. Wavelets form a real-space basis set distributed on an adaptive mesh (two levels of resolution in this implementation). GTH or HGH pseudopotentials are used to remove the core electrons.
BigDFT is available in ABINIT v5.5 and higher, and can also be downloaded as a standalone version from the website. Thanks to its Poisson solver, based on a Green's function formalism, periodic systems, surfaces, and isolated systems can be simulated with explicit boundary conditions. The Poisson solver can also be downloaded and used independently; it is integrated in ABINIT, Octopus, and CP2K.
The code is free software, available under the GNU GPL license, and the BigDFT developer community encourages anyone willing to contribute to join the team. The code, tutorials, and documentation are available on the BigDFT site.
Before running the NGC BigDFT container please ensure your system meets the following requirements.
BigDFT generally takes no command-line arguments. Instead, input files are read implicitly, by name, from the current working directory. Output can be redirected as usual, using standard UNIX or mpirun mechanisms.
$ mpirun -np {num_procs} bigdft > log
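For example, a typical run directory contains the BigDFT input files for the calculation (in recent versions, commonly a YAML parameter file such as input.yaml and an atomic-position file such as posinp.xyz; the exact names depend on the input set), and a run might look like the following minimal sketch:
$ ls
input.yaml  posinp.xyz
$ mpirun -np 4 bigdft > log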
The following examples demonstrate how to run the NGC BigDFT container under supported container runtimes.
The following modes of running are supported under nvidia-docker:

The DOCKER alias will be used to launch processes within the NGC BigDFT container using the nvidia-docker runtime:
$ export DOCKER="nvidia-docker run -it --rm -v $(pwd):/host_pwd -w /host_pwd nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl"
Where:
DOCKER : alias used to store the base Docker command
run : specifies the mode of execution
-it : runs the container in an interactive tty shell
--rm : makes the container instance ephemeral (does not save on exit)
-v $(pwd):/host_pwd : bind mounts the current working directory into the container at /host_pwd
-w /host_pwd : sets the initial working directory in the container to /host_pwd
nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl : URI of the latest NGC BigDFT container

Keep in mind that DOCKER bind mounts the host's current working directory into the container at /host_pwd, so input is read from and output is written to the host's filesystem.
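Once set, DOCKER can be used as a shorthand prefix for the commands in the following sections; for example, the direct run shown below could equivalently be written as the following sketch (note that the $(pwd) in the alias is expanded when the variable is exported, so export it from the directory that holds your input files):
$ $DOCKER bigdft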
To run the BigDFT container from the command line, mount the desired input files into the container. For example, to
mount the current working directory into the container at /host_pwd/
and run BigDFT there:
$ nvidia-docker run -it --rm -v $(pwd):/host_pwd -w /host_pwd nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl bigdft
Where:
run : specifies the mode of execution
-it : runs the container in an interactive tty shell
--rm : makes the container instance ephemeral (does not save on exit)
-v $(pwd):/host_pwd : bind mounts the current working directory into the container at /host_pwd
-w /host_pwd : sets the initial working directory in the container to /host_pwd
nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl : URI of the latest NGC BigDFT container
bigdft : the BigDFT executable

Note that the host's current working directory must be readable and writable by others in order for the container to access it via /host_pwd. Use chmod o+rw . to grant others access to the current directory.
BigDFT requires input data to be present in the current directory. Example data is available within the container; to copy the FeHyb example data into the host's current directory, use:
$ nvidia-docker run -it --rm -v $(pwd):/host_pwd -w /host_pwd nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl cp -r /docker/FeHyb/GPU /host_pwd
$ cd GPU
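With the example data copied and the working directory changed to GPU, BigDFT can be run on it from the host; for example, reusing the run command shown above (now executed from inside the GPU directory):
$ nvidia-docker run -it --rm -v $(pwd):/host_pwd -w /host_pwd nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl bigdft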
To run the container interactively, launch /bin/bash within the container:
$ nvidia-docker run -it --rm -v $(pwd):/host_pwd -w /host_pwd nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl /bin/bash
Where:
run : specifies the mode of execution
-it : runs the container in an interactive tty shell
--rm : makes the container instance ephemeral (does not save on exit)
-v $(pwd):/host_pwd : bind mounts the current working directory into the container at /host_pwd
-w /host_pwd : sets the initial working directory in the container to /host_pwd
nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl : URI of the latest NGC BigDFT container
/bin/bash : starts an interactive Bash shell

To run BigDFT from the interactive shell, change to the directory containing the desired input data and run:
$ bigdft
Recall that the DOCKER alias binds the host's present working directory to /host_pwd when the container starts, which enables input from and output to the host's filesystem. Note that the host's current working directory must be readable and writable by others in order for the container to access it via /host_pwd. Use chmod o+rw . to grant others access to the current directory.
Sample input data is also available within the container, including FeHyb data at /ContainerXp/FeHyb/GPU.
After the computation, output can be found in the log.yaml file and timings in time.yaml. These files can be copied to /host_pwd so that they remain available on the host after exiting the container.
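For example, when the run was performed in a container-internal directory such as /ContainerXp/FeHyb/GPU, the results can be copied back before exiting (a sketch from the interactive shell inside the container):
$ cp log.yaml time.yaml /host_pwd/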
The NGC BigDFT container can also be controlled remotely via an interactive Jupyter interface. To start the
Jupyter server, simply invoke the container with nvidia-docker:
$ nvidia-docker run -p 8888:8888 -it --rm -v $(pwd):/results nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl
This starts a Jupyter web interface on port 8888. The default password of the Jupyter web interface is bigdft.
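The interface can then be opened at http://localhost:8888 in a browser. If the container runs on a remote machine, the port can be forwarded with a standard SSH tunnel first; a sketch, where <user> and <remote_host> are placeholders for your own account and host:
$ ssh -L 8888:localhost:8888 <user>@<remote_host>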
More documentation can be found on the BigDFT Documentation Webpage.
OpenMPI is available within the NGC BigDFT container for multi-GPU utilization.
In order to run the NGC BigDFT container with OpenMPI, use:
$ nvidia-docker run -it --rm -v $(pwd):/host_pwd -w /host_pwd --ipc=host nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl mpirun -n <n> bigdft
Where:
run : specifies the mode of execution
-it : runs the container in an interactive tty shell
--rm : makes the container instance ephemeral (does not save on exit)
-v $(pwd):/host_pwd : bind mounts the current working directory into the container at /host_pwd
-w /host_pwd : sets the initial working directory in the container to /host_pwd
--ipc=host : shares the host's inter-process communication namespace with the container
nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl : URI of the latest NGC BigDFT container
-n <n> : sets the number of MPI processes to <n>
It is recommended to set the number of MPI processes, <n>, equal to the number of GPUs available on the host.
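For example, the number of visible GPUs can be queried with nvidia-smi on the host and passed to mpirun; a sketch, assuming the NVIDIA driver utilities are available on the host:
$ ngpus=$(nvidia-smi -L | wc -l)
$ nvidia-docker run -it --rm -v $(pwd):/host_pwd -w /host_pwd --ipc=host nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl mpirun -n ${ngpus} bigdft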
Save the NGC BigDFT container as a local Singularity image file:
$ singularity build bigdft_cuda10-ubuntu1804-ompi4-mkl.simg docker://nvcr.io/hpc/bigdft:cuda10-ubuntu1804-ompi4-mkl
This command saves the container in the current directory as bigdft_cuda10-ubuntu1804-ompi4-mkl.simg.
In order to pull NGC images with singularity
version 2.x and earlier, NGC container registry authentication credentials are required.
To set your NGC container registry authentication credentials:
$ export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
$ export SINGULARITY_DOCKER_PASSWORD=<NVIDIA NGC Cloud Services API key>
More information describing how to obtain and use your NVIDIA NGC Cloud Services API key can be found here.
This mode of running is suitable for interactive execution from a local workstation containing one or more GPUs. There are no requirements other than those stated in the System Requirements section.
Important Note for Amazon Machine Image users:
Amazon Machine Images on Amazon Web Service have a default root umask of 077
. Singularity must be installed with a umask of 022
to run properly. To (re)install Singularity with correct permissions:
$ umask 0022
$ # (re)install Singularity here, e.g. via the distribution's package manager or from source
$ umask 0077
This causes installed Singularity files to have permission 0755 instead of the default 0700.
Note that the umask command only applies to the current shell. Use umask and install Singularity from the same shell session.
To run the BigDFT container from the command line, mount the desired input files into the container. For example, to
mount the current working directory into the container at /host_pwd/
and run BigDFT there:
$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd bigdft_cuda10-ubuntu1804-ompi4-mkl.simg bigdft
Where:
exec : specifies the mode of execution
--nv : exposes the host GPUs to the container
-B $(pwd):/host_pwd : bind mounts the current working directory into the container at /host_pwd
--pwd /host_pwd : sets the working directory to /host_pwd when the container starts
bigdft_cuda10-ubuntu1804-ompi4-mkl.simg : path of the saved Singularity image
bigdft : the BigDFT executable

The above command requires input data to be present in the host's current directory. Example data is available within the container; to copy the FeHyb example data into the host's current directory, use:
$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd bigdft_cuda10-ubuntu1804-ompi4-mkl.simg /bin/bash -c "cp -r /docker/FeHyb/GPU /host_pwd"
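The copied data can then be used directly; for example, the following sketch changes into the copied GPU directory on the host and runs BigDFT there with the command shown above:
$ cd GPU
$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd bigdft_cuda10-ubuntu1804-ompi4-mkl.simg bigdft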
To run the container interactively, launch /bin/bash within the container:
$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd bigdft_cuda10-ubuntu1804-ompi4-mkl.simg /bin/bash
Where:
exec : specifies the mode of execution
--nv : exposes the host GPUs to the container
-B $(pwd):/host_pwd : bind mounts the current working directory into the container at /host_pwd
--pwd /host_pwd : sets the working directory to /host_pwd when the container starts
bigdft_cuda10-ubuntu1804-ompi4-mkl.simg : path of the saved Singularity image
/bin/bash : starts an interactive Bash shell

To run BigDFT from the interactive shell, change to the directory containing the desired input data and run:
$ bigdft
Recall that the -B $(pwd):/host_pwd argument binds the host's present working directory to /host_pwd when the container starts, which enables input from and output to the host's filesystem.
Sample input data is also available within the container, including FeHyb data at /ContainerXp/FeHyb/GPU.
After the computation, output can be found in the log.yaml file and timings in time.yaml. These files can be copied to /host_pwd so that they remain available on the host after exiting the container.
The NGC BigDFT container can also be controlled remotely via an interactive Jupyter interface. To start the
Jupyter server, simply invoke the container with singularity run:
$ singularity run --nv -B $(pwd):/host_pwd --pwd /host_pwd bigdft_cuda10-ubuntu1804-ompi4-mkl.simg
This starts a Jupyter web interface on port 8888. The default password of the Jupyter web interface is bigdft.
More documentation can be found on the BigDFT Documentation Webpage.
OpenMPI is available within the NGC BigDFT container for multi-GPU utilization.
In order to run the NGC BigDFT container with OpenMPI, use:
$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd bigdft_cuda10-ubuntu1804-ompi4-mkl.simg mpirun -n <n> bigdft
Where:
exec : specifies the mode of execution
--nv : exposes the host GPUs to the container
-B $(pwd):/host_pwd : bind mounts the current working directory into the container at /host_pwd
--pwd /host_pwd : sets the working directory to /host_pwd when the container starts
bigdft_cuda10-ubuntu1804-ompi4-mkl.simg : path of the saved Singularity image
-n <n> : sets the number of MPI processes to <n>
It is recommended to set the number of MPI processes, <n>, equal to the number of GPUs available on the host.
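BigDFT is a hybrid MPI/OpenMP code, so it can also help to set an explicit OpenMP thread count per MPI rank. A sketch, assuming the standard OMP_NUM_THREADS variable and using Singularity's SINGULARITYENV_ prefix to pass it into the container:
$ export SINGULARITYENV_OMP_NUM_THREADS=4
$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd bigdft_cuda10-ubuntu1804-ompi4-mkl.simg mpirun -n <n> bigdft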
Clusters with a local compatible OpenMPI installation may launch the NGC BigDFT container using the host provided mpirun
or mpiexec
launcher.
To use the cluster provided mpirun
command to launch the NGC BigDFT container, OpenMPI/3.0.2
or newer is required.
Running with mpirun
maintains tight integration with the resource manager.
Launch BigDFT within the container, using mpirun
:
$ mpirun -n <N> --map-by ppr:<num_proc>:socket singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd bigdft.simg bigdft
Where:
<N> : MPI process count
--map-by ppr:<num_proc>:socket : launches <num_proc> MPI ranks per CPU socket
exec : specifies the mode of execution

It is recommended to set the number of MPI processes, <N>, equal to the number of GPUs available, and <num_proc> to the number of GPUs with affinity to each CPU socket.
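For example, on a hypothetical node with two CPU sockets and four GPUs (two per socket), the launch might look like the following sketch (adjust the counts to your hardware):
$ mpirun -n 4 --map-by ppr:2:socket singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd bigdft.simg bigdft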
The NGC BigDFT container allows the user to launch parallel MPI jobs from fully within the container. This mode has the least host requirements, but does necessitate additional setup steps, as described below.
The internal container OpenMPI installation requires an OpenMPI hostfile to specify the addresses of all nodes in the cluster. The OpenMPI hostfile takes the following form:
<hostname_1>
<hostname_2>
...
<hostname_n>
Generation of this nodelist file via bash script will vary from cluster to cluster. Common examples include:
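# SLURM: build the hostfile from the nodes allocated to the current job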
HOSTFILE=".hostfile.${SLURM_JOB_ID}"
for host in $(scontrol show hostnames); do
echo "${host}" >> ${HOSTFILE}
done
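# PBS/Torque: build the hostfile from the nodes listed in $PBS_NODEFILE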
HOSTFILE=$(pwd)/.hostfile.${PBS_JOBID}
for host in $(uniq ${PBS_NODEFILE}); do
echo "${host}" >> ${HOSTFILE}
done
Additionally, mpirun must be configured to start the OpenMPI orted process within the container runtime. Set the following environment variables so that mpirun launches orted inside the container:
$ export SIMG=bigdft.simg
$ export SINGULARITY="singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd ${SIMG}"
$ export OMPI_MCA_plm=rsh
$ export OMPI_MCA_plm_rsh_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR'
$ export OMPI_MCA_orte_launch_agent="${SINGULARITY} /usr/bin/orted"
To launch BigDFT using mpirun
:
$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd bigdft.simg mpirun --hostfile <hostfile> -n <N> --map-by ppr:<num_proc>:socket bigdft
Where:
SINGULARITY : the Singularity launch command defined above
<hostfile> : text file listing the compute hosts
<N> : MPI process count
--map-by ppr:<num_proc>:socket : launches <num_proc> MPI ranks per CPU socket

BigDFT includes two examples, each with CPU and GPU variants. These examples can be run with the instructions provided above.
/ContainerXp/FeHyb/NOGPU : directory containing the CPU-only FeHyb example
/ContainerXp/FeHyb/GPU : directory containing the GPU-accelerated FeHyb example
/ContainerXp/H2O-32/CPU : directory containing the CPU-only H2O example
/ContainerXp/H2O-32/GPU : directory containing the GPU-accelerated H2O example

The FeHyb CPU example includes an additional reference log file, log.ref.yaml, which can be used to check the correctness of the FeHyb output. To check a FeHyb output log.yaml, run the following command from an interactive shell within the container:
$ python /usr/local/bigdft/lib/python2.7/site-packages/fldiff_yaml.py -d /path/to/log.yaml -r /docker/FeHyb/NOGPU/log.ref.yaml -t /docker/FeHyb/NOGPU/tols-BigDFT.yaml
Where:
/usr/local/bigdft/lib/python2.7/site-packages/fldiff_yaml.py : path of the YAML file comparison tool
-d /path/to/log.yaml : path to the FeHyb output log.yaml data file to check
-r /docker/FeHyb/NOGPU/log.ref.yaml : path to the reference FeHyb output file
-t /docker/FeHyb/NOGPU/tols-BigDFT.yaml : path to the FeHyb comparison tolerances file

The correct output should include:
Test succeeded: True
BigDFT documentation.
The BigDFT Python interface is documented here.