Supported platforms: Linux / arm64, Linux / amd64
By using this container image, you agree to the NVIDIA HPC SDK End-User License Agreement.
The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries and tools essential to maximizing developer productivity and the performance and portability of HPC applications. The NVIDIA HPC SDK C, C++, and Fortran compilers support GPU acceleration of HPC modeling and simulation applications with standard C++ and Fortran, OpenACC directives, and CUDA. GPU-accelerated math libraries maximize performance on common HPC algorithms, and optimized communications libraries enable standards-based multi-GPU and scalable systems programming. Performance profiling and debugging tools simplify porting and optimization of HPC applications, and containerization tools enable easy deployment on-premises or in the cloud.
Key features of the NVIDIA HPC SDK for Linux include C, C++, and Fortran compilers with support for standard language parallelism, OpenACC directives, and CUDA; GPU-accelerated math libraries; optimized communications libraries for multi-GPU and scalable systems programming; and performance profiling and debugging tools.
Before running the NVIDIA HPC SDK NGC container, please ensure that your system meets the following requirements:
- Docker 19.03 or later with the --gpus option, or Singularity version 3.4.1 or later

When using the "cuda_multi" images, the NVIDIA HPC SDK will automatically choose among CUDA versions 11.0, 11.8, or 12.2 based on your installed driver. See the NVIDIA HPC SDK User's Guide for more information on using different CUDA Toolkit versions.
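To see which driver version is installed on the host (and therefore which of these CUDA versions a cuda_multi image will select), one convenient check is to query nvidia-smi:

$ nvidia-smi --query-gpu=driver_version --format=csv,noheader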
Multiarch containers for Arm (aarch64) and x86_64 are available for select tags starting with version 21.7.
Please see the NVIDIA HPC SDK User's Guide for getting started with the HPC SDK.
The HPC SDK Container Guide is a resource for using the HPC SDK with containers.
For a general guide on pulling and running containers, see Pulling A Container image and Running A Container in the NGC Container User Guide.
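For example, the devel image used throughout this page can be pulled directly (substitute a different tag if needed):

$ docker pull nvcr.io/nvidia/nvhpc:23.7-devel-cuda_multi-ubuntu20.04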
Several source code examples are available in the container at /opt/nvidia/hpc_sdk/Linux_x86_64/23.7/examples.
To access the OpenACC examples in an interactive session, use:
$ docker run --gpus all -it --rm nvcr.io/nvidia/nvhpc:23.7-devel-cuda_multi-ubuntu20.04
$ cd /opt/nvidia/hpc_sdk/Linux_x86_64/23.7/examples/OpenACC/samples
$ make all
More detailed instructions on using the HPC SDK NGC container in Docker and Singularity can be found below.
The instructions below assume Docker 19.03 or later. If using an older version of Docker with the nvidia-docker plugin, substitute docker run --gpus all below with nvidia-docker run.
The following command mounts the current directory to /host_pwd and starts an interactive terminal inside the container:
$ docker run --gpus all -it --rm -v $(pwd):/host_pwd -w /host_pwd nvcr.io/nvidia/nvhpc:23.7-devel-cuda_multi-ubuntu20.04
Where:
- --gpus all : use all available GPUs
- --rm : makes the container ephemeral (does not save changes to the image on exit)
- -it : allocates an interactive tty shell for the container
- -v $(pwd):/host_pwd : bind mounts the current working directory of the host into the container at /host_pwd
- -w /host_pwd : sets the initial directory of the container to /host_pwd
- nvcr.io/nvidia/nvhpc:23.7-devel-cuda_multi-ubuntu20.04 : URI of the latest HPC SDK NGC container image

To invoke NVIDIA HPC compilers during an interactive session, run make or call the compilers directly (nvfortran, nvc, nvc++) on files in the mounted directory.

For example, to run the nvfortran compiler:
$ cd /host_pwd
$ nvfortran -static-nvidia -gpu -o my_output_file my_source_file.f95
Where:
- nvfortran : the NVIDIA Fortran compiler
- -static-nvidia : compiles for and links to the static version of the NVIDIA runtime libraries
- -gpu : compiles for NVIDIA GPUs. NVIDIA compilers will automatically detect GPU information and build accordingly; a specific compute capability can also be requested, for example -gpu=cc80 (for A100) or -gpu=cc70,cc80 (for V100 and A100)
- -o my_output_file : names the compiled output file
- my_source_file.f95 : path to a Fortran source file (a sample file is sketched after this list)
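As an illustration only (the file name and program are hypothetical), the following creates a minimal OpenACC SAXPY source file and compiles it inside the container; the -acc flag is added here to enable the OpenACC directive used in the example:

$ cat > my_source_file.f95 <<'EOF'
program saxpy
  implicit none
  integer, parameter :: n = 1000000
  real :: x(n), y(n)
  integer :: i
  x = 1.0
  y = 2.0
  ! Offload the loop to the GPU via OpenACC
  !$acc parallel loop copyin(x) copy(y)
  do i = 1, n
     y(i) = 2.0 * x(i) + y(i)
  end do
  print *, 'y(1) =', y(1)
end program saxpy
EOF
$ nvfortran -acc -gpu -o my_output_file my_source_file.f95
$ ./my_output_file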
To use the NVIDIA HPC compilers with a Makefile in the current directory (a minimal example Makefile is sketched after the option list below):
$ docker run --gpus all --rm -it -v $(pwd):/host_pwd -w /host_pwd nvcr.io/nvidia/nvhpc:23.7-devel-cuda_multi-ubuntu20.04 make
Where:
- --gpus all : use all available GPUs
- --rm : makes the container ephemeral (does not save changes to the image on exit)
- -it : allocates an interactive tty shell for the container
- -v $(pwd):/host_pwd : bind mounts the current working directory of the host into the container at /host_pwd
- -w /host_pwd : sets the initial directory of the container to /host_pwd
- nvcr.io/nvidia/nvhpc:23.7-devel-cuda_multi-ubuntu20.04 : URI of the latest NVIDIA HPC SDK container image
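The Makefile itself is whatever your project provides; purely as a sketch (the variable names, flags, and target are assumptions, and the recipe line must begin with a TAB character), a minimal Makefile driving nvfortran could look like this:

$ cat > Makefile <<'EOF'
FC     = nvfortran
FFLAGS = -acc -gpu
my_output_file: my_source_file.f95
	$(FC) $(FFLAGS) -o $@ $<
EOF

With such a file in the mounted directory, the docker run ... make command above builds my_output_file inside the container.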
It is also possible to call the NVIDIA HPC compilers directly (nvfortran, nvc, nvc++) on an appropriate source file.

For example, to use the nvfortran compiler:
$ docker run --gpus all --rm -it -v $(pwd):/host_pwd -w /host_pwd nvcr.io/nvidia/nvhpc:23.7-devel-cuda_multi-ubuntu20.04 nvfortran -gpu -o my_output_file my_source_file.f95
Where:
- --gpus all : use all available GPUs
- --rm : makes the container ephemeral (does not save changes to the image on exit)
- -it : allocates an interactive tty shell for the container
- -v $(pwd):/host_pwd : bind mounts the current working directory of the host into the container at /host_pwd
- -w /host_pwd : sets the initial directory of the container to /host_pwd
- nvcr.io/nvidia/nvhpc:23.7-devel-cuda_multi-ubuntu20.04 : URI of the latest NVIDIA HPC SDK NGC container image
- nvfortran : the NVIDIA Fortran compiler
- -gpu : compiles for NVIDIA GPUs. NVIDIA compilers will automatically detect GPU information and build accordingly; a specific compute capability can also be requested, for example -gpu=cc80 (for A100) or -gpu=cc70,cc80 (for V100 and A100)
- -o my_output_file : names the compiled output file
- my_source_file.f95 : path to a Fortran source file

Runtime container images are provided to redistribute applications built with the HPC SDK as new container images. Based on the CUDA Toolkit version used to build the application, select the appropriate runtime HPC SDK container image.
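One hedged way to do this with Docker is a multi-stage build: compile with the devel image, then copy the result into a runtime image. The runtime tag used below is an assumption, as are the file and image names; pick the runtime image that matches the CUDA version you actually compiled against, as listed on the NGC tags page:

$ cat > Dockerfile <<'EOF'
# Build stage: compile with the devel image
FROM nvcr.io/nvidia/nvhpc:23.7-devel-cuda_multi-ubuntu20.04 AS build
COPY my_source_file.f95 /src/
RUN cd /src && nvfortran -acc -gpu -o my_output_file my_source_file.f95
# Runtime stage: assumed tag, verify on the NGC tags page
FROM nvcr.io/nvidia/nvhpc:23.7-runtime-cuda12.2-ubuntu20.04
COPY --from=build /src/my_output_file /usr/local/bin/my_output_file
CMD ["/usr/local/bin/my_output_file"]
EOF
$ docker build -t my_app .
$ docker run --gpus all --rm my_app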
The instructions below assume Singularity 3.4.1 or later.
Save the NVIDIA HPC SDK NGC container as a local Singularity image file:
$ singularity build nvhpc_23.7_devel.sif docker://nvcr.io/nvidia/nvhpc:23.7-devel-cuda_multi-ubuntu20.04
This command saves the container in the current directory as nvhpc_23.7_devel.sif.
The following command starts an interactive shell (/bin/bash) inside the container:
$ singularity shell --nv nvhpc_23.7_devel.sif
Where:
- shell : specifies the mode of execution
- --nv : exposes the host GPUs to the container
- nvhpc_23.7_devel.sif : path to the Singularity image built above

To invoke NVIDIA HPC compilers during an interactive session, run make or call the compilers directly (nvfortran, nvc, nvc++) on files in the mounted directory.

For example, to run the nvfortran compiler:
$ nvfortran -static-nvidia -gpu -o my_output_file my_source_file.f95
Where:
- nvfortran : the NVIDIA Fortran compiler
- -static-nvidia : compiles for and links to the static version of the NVIDIA runtime libraries
- -gpu : compiles for NVIDIA GPUs. NVIDIA compilers will automatically detect GPU information and build accordingly; a specific compute capability can also be requested, for example -gpu=cc80 (for A100) or -gpu=cc70,cc80 (for V100 and A100)
- -o my_output_file : names the compiled output file
- my_source_file.f95 : path to a Fortran source file

To use the NVIDIA HPC compilers with a Makefile in the current directory:
$ singularity exec --nv nvhpc_23.7_devel.sif make
Where:
- exec : specifies the mode of execution
- --nv : exposes the host GPUs to the container
- nvhpc_23.7_devel.sif : path to the Singularity image built above

It is also possible to call the NVIDIA HPC compilers directly (nvfortran, nvc, nvc++) on an appropriate source file.

For example, to use the nvfortran compiler:
$ singularity exec --nv nvhpc_23.7_devel.sif nvfortran -gpu -o my_output_file my_source_file.f95
Where:
- exec : specifies the mode of execution
- --nv : exposes the host GPUs to the container
- nvhpc_23.7_devel.sif : path to the Singularity image built above
- nvfortran : the NVIDIA Fortran compiler
- -gpu : compiles for NVIDIA GPUs. NVIDIA compilers will automatically detect GPU information and build accordingly; a specific compute capability can also be requested, for example -gpu=cc80 (for A100) or -gpu=cc70,cc80 (for V100 and A100)
- -o my_output_file : names the compiled output file
- my_source_file.f95 : path to a Fortran source file

Runtime container images are provided to redistribute applications built with the HPC SDK as new container images. Based on the CUDA Toolkit version used to build the application, select the appropriate runtime HPC SDK container image.
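With Singularity, one option (again, the runtime tag and file names below are assumptions; check the NGC tags page for the runtime image matching your CUDA version) is to build a .sif from the runtime image and use it to run the binary you compiled with the devel image:

$ singularity build nvhpc_23.7_runtime.sif docker://nvcr.io/nvidia/nvhpc:23.7-runtime-cuda12.2-ubuntu20.04
$ singularity exec --nv nvhpc_23.7_runtime.sif ./my_output_file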