Latest Tag: 24.5-devel-cuda_multi-ubuntu22.04 (updated May 23, 2024)
Compressed Size: 10.4 GB
Multinode Support: Yes
Multi-Arch Support: Linux/arm64, Linux/amd64

By using this container image, you agree to the NVIDIA HPC SDK End-User License Agreement.


The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries and tools essential to maximizing developer productivity and the performance and portability of HPC applications. The NVIDIA HPC SDK C, C++, and Fortran compilers support GPU acceleration of HPC modeling and simulation applications with standard C++ and Fortran, OpenACC directives, and CUDA. GPU-accelerated math libraries maximize performance on common HPC algorithms, and optimized communications libraries enable standards-based multi-GPU and scalable systems programming. Performance profiling and debugging tools simplify porting and optimization of HPC applications, and containerization tools enable easy deployment on-premises or in the cloud.
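The GPU programming models named above map directly onto the bundled compiler drivers. A minimal sketch of the corresponding invocations inside a running HPC SDK container (the source file names `saxpy.c`, `solver.cpp`, and `kernel.cuf` are hypothetical placeholders):

```shell
# Inside the container, the compilers are already on PATH.

# Compile an OpenACC-annotated C source for GPU offload;
# -Minfo=accel reports which loops were accelerated:
nvc -acc -Minfo=accel -o saxpy saxpy.c

# Compile ISO C++17 parallel algorithms for GPU execution:
nvc++ -stdpar=gpu -o solver solver.cpp

# Compile CUDA Fortran:
nvfortran -cuda -o kernel kernel.cuf
```

The same `-acc`, `-stdpar`, and `-cuda` options are accepted by all three compiler front ends; see the compiler reference manual for the full option set.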

Key features of the NVIDIA HPC SDK for Linux include:

  • Support for NVIDIA Hopper architecture GPUs
  • Support for NVIDIA Ampere Architecture GPUs with FP16, TF32 and FP64 tensor cores and MIG
  • Support for NVIDIA Volta tensor core GPUs and NVIDIA Pascal GPUs
  • Supported on CUDA 12.4, 12.3, 12.2, 12.1, 12.0, 11.8, 11.7, 11.6, 11.5, 11.4, 11.3, 11.2, 11.1, and 11.0
  • Support for x86-64, OpenPOWER and Arm Server multicore CPUs
  • NVC++ ISO C++17 compiler with Parallel Algorithms acceleration on GPUs, OpenACC and OpenMP
  • NVFORTRAN ISO Fortran 2003 compiler with array intrinsics acceleration on GPUs, CUDA Fortran, OpenACC and OpenMP
  • NVC ISO C11 compiler with OpenACC and OpenMP
  • NVCC NVIDIA CUDA C++ compiler
  • cuBLAS GPU-accelerated basic linear algebra subroutine (BLAS) library
  • cuSOLVER GPU-accelerated dense and sparse direct solvers
  • cuSPARSE GPU-accelerated BLAS for sparse matrices
  • cuFFT GPU-accelerated library for Fast Fourier Transforms
  • cuTENSOR GPU-accelerated tensor linear algebra library
  • cuRAND GPU-accelerated random number generation (RNG)
  • NVIDIA Performance Libraries (NVPL) for HPC math operations on NVIDIA CPUs
  • Thrust GPU-accelerated library of C++ parallel algorithms and data structures
  • CUB cooperative threadblock primitives and utilities for CUDA kernel programming
  • libcu++ opt-in heterogeneous CUDA C++ Standard Library for NVCC
  • NCCL library for fast multi-GPU/multi-node collective communications
  • NVSHMEM library for fast GPU memory-to-memory transfers (OpenSHMEM compatible)
  • Open MPI GPU-aware message passing interface library
  • NVIDIA Nsight Systems interactive HPC applications performance profiler
  • NVIDIA Nsight Compute interactive GPU compute kernel performance profiler

System Requirements

Before running the NVIDIA HPC SDK NGC container, please ensure that your system meets the following requirements.

  • Pascal (sm60), Volta (sm70), Turing (sm75), Ampere (sm80), or Hopper (sm90) NVIDIA GPU(s)
  • CUDA driver version >= 450.36.06
  • Docker 19.03 or later, which includes native support for the --gpus option, or Singularity version 3.4.1 or later
  • For older Docker versions, use nvidia-docker version 2.0.3 or later
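With those requirements met, pulling and starting the container follows the usual NGC pattern. A sketch using the latest tag from the listing above:

```shell
# Pull the image from the NGC registry:
docker pull nvcr.io/nvidia/nvhpc:24.5-devel-cuda_multi-ubuntu22.04

# Start an interactive shell with all GPUs visible (Docker 19.03+):
docker run --gpus all -it --rm nvcr.io/nvidia/nvhpc:24.5-devel-cuda_multi-ubuntu22.04

# On older Docker with nvidia-docker 2.x, omit --gpus and use:
nvidia-docker run -it --rm nvcr.io/nvidia/nvhpc:24.5-devel-cuda_multi-ubuntu22.04
```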

When using the "cuda_multi" images, the NVIDIA HPC SDK automatically chooses between CUDA versions 11.8 and 12.4 based on your installed driver. See the NVIDIA HPC SDK User's Guide for more information on using different CUDA Toolkit versions.
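The automatic selection can also be overridden per compilation with the compilers' `-gpu=cudaX.Y` option. A sketch (the file name `app.cpp` is a placeholder):

```shell
# Force the compiler to target the CUDA 11.8 toolkit shipped in the
# cuda_multi image, regardless of the driver-based default:
nvc++ -gpu=cuda11.8 -acc -o app app.cpp
```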

Multiarch containers for Arm (aarch64) and x86_64 are available for select tags starting with version 21.7.
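Docker resolves a multiarch tag to the image matching the host architecture automatically; to request a specific platform explicitly, a sketch:

```shell
# Pull the Arm (aarch64) variant of a multiarch tag:
docker pull --platform linux/arm64 nvcr.io/nvidia/nvhpc:24.5-devel-cuda_multi-ubuntu22.04

# Pull the x86_64 variant:
docker pull --platform linux/amd64 nvcr.io/nvidia/nvhpc:24.5-devel-cuda_multi-ubuntu22.04
```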

Running the NVIDIA HPC SDK

Please see the NVIDIA HPC SDK User's Guide to get started with the HPC SDK.

Refer to the HPC SDK Container Guide for more information on how to use the HPC SDK containers.
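For Singularity users, a minimal sketch of building a local image from the NGC registry and running it with GPU support (the output file name `nvhpc-24.5.sif` is arbitrary):

```shell
# Convert the NGC Docker image into a local Singularity image file:
singularity build nvhpc-24.5.sif docker://nvcr.io/nvidia/nvhpc:24.5-devel-cuda_multi-ubuntu22.04

# Open a shell in the container with NVIDIA GPU support (--nv):
singularity shell --nv nvhpc-24.5.sif
```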

For a general guide on pulling and running containers, see Pulling A Container image and Running A Container in the NGC Container User Guide.