
CUDA Deep Learning

Description: CUDA is a parallel computing platform and programming model that enhances computing performance using NVIDIA GPUs. CUDA Deep Learning integrates networking and GPU-accelerated libraries such as cuDNN, cuTensor, NCCL, HPC-X, and the CUDA Toolkit.
Publisher: NVIDIA
Latest Tag: 25.04-cuda12.9-runtime-ubuntu24.04
Modified: May 2, 2025
Compressed Size: 5.21 GB
Multinode Support: No
Multi-Arch Support: Yes
Security Scan Results for 25.04-cuda12.9-runtime-ubuntu24.04 (Latest): scan results for Linux / amd64 and Linux / arm64 are available on the Security Scanning tab.

CUDA Deep Learning

CUDA, developed by NVIDIA, is a parallel computing platform and programming model for GPUs. With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs. The CUDA Toolkit includes libraries, a compiler, development tools, and the CUDA runtime needed for GPU-accelerated development.
The CUDA Deep Learning image extends the CUDA images by adding networking support and additional libraries that accelerate deep learning workloads: cuDNN, cuTensor, NCCL, and HPC-X. These images are provided for use as a base layer upon which to build your own GPU-accelerated application container image, as sketched below.
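For example, a minimal sketch of building on this image as a base layer (the tag, the app/ directory, and the make step are hypothetical placeholders; substitute your own application and a tag from this repository's tag list):

cat > Dockerfile <<'EOF'
# Start from the CUDA DL base image (tag is a placeholder).
FROM nvcr.io/nvidia/cuda-dl-base:24.09-cuda12.6-devel-ubuntu22.04

# Copy in and build a hypothetical GPU-accelerated application.
COPY app/ /opt/app/
WORKDIR /opt/app
RUN make
EOF

docker build -t my-gpu-app .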

Prerequisites

Using the CUDA DL NGC Container requires the host system to have the following installed:

  • Docker Engine
  • NVIDIA GPU Drivers
  • NVIDIA Container Toolkit

For supported versions, see the Framework Containers Support Matrix and the NVIDIA Container Toolkit Documentation.
No other installation, compilation, or dependency management is required. It is not necessary to install the NVIDIA CUDA Toolkit.
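As a quick sanity check of this setup (a minimal sketch; the tag is the example release used later on this page), you can run nvidia-smi through the container runtime:

docker run --rm --gpus all nvcr.io/nvidia/cuda-dl-base:24.09-cuda12.6-devel-ubuntu22.04 nvidia-smi

If the GPU drivers and the NVIDIA Container Toolkit are installed correctly, this prints the GPU table from inside the container.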
The CUDA Deep Learning NGC Container is also optimized to run on NVIDIA DGX Foundry and NVIDIA DGX SuperPOD managed by NVIDIA Base Command Platform. Please refer to the Base Command Platform User Guide to learn more.

Running Container Using Docker

To run a container, issue the appropriate command as explained in the Running A Container chapter in the NVIDIA Containers For Deep Learning Frameworks User’s Guide and specify the registry, repository, and tags. For more information about using NGC, refer to the NGC Container User Guide. A typical command to launch the container is:

docker run --gpus all -it --rm nvcr.io/nvidia/cuda-dl-base:YY.MM-cuda<xx.y>-devel-ubuntu<YY.MM>

Where:

  • YY.MM-cuda<xx.y>-devel-ubuntu<YY.MM> is the container tag, where YY.MM is the release number (year and month), cuda<xx.y> is the CUDA version included in the container, and ubuntu<YY.MM> is the Ubuntu version the container is built on.

For example:

docker run --gpus all -it --rm nvcr.io/nvidia/cuda-dl-base:24.09-cuda12.6-devel-ubuntu22.04
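Once the container is running, a quick check from its shell (a sketch; it assumes the -devel image includes the CUDA compiler, as the CUDA devel images do) confirms the toolkit and GPUs are visible:

nvcc --version     # prints the CUDA compiler/toolkit version
nvidia-smi         # lists the GPUs visible inside the container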

What Is In This Container?

For the full list of contents, see the CUDA DL Container Release Notes. The NVIDIA CUDA Deep Learning Container is optimized for use with NVIDIA GPUs, and contains the following software for GPU acceleration:

  • NVIDIA CUDA
  • NVIDIA cuTensor
  • NVIDIA cuDNN
  • NVIDIA NCCL (optimized for NVLink)
  • NVIDIA TensorRT
  • NVIDIA HPC-X

The software stack in this container has been validated for compatibility, and does not require any additional installation or compilation from the end user. This container can help accelerate your deep learning workflow from end to end.
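To see which versions of these libraries a given image ships, beyond what the release notes list, one option is to query the dynamic linker cache from inside the container. This is a sketch and assumes the libraries are registered with ldconfig, as in the CUDA base images:

docker run --rm nvcr.io/nvidia/cuda-dl-base:24.09-cuda12.6-devel-ubuntu22.04 bash -c "ldconfig -p | grep -Ei 'cudnn|nccl|cutensor'"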

Security CVEs

To review known CVEs on this image, refer to the Security Scanning tab on this page.

License

By pulling and using the container, you accept the terms and conditions of this End User License Agreement and Product-Specific Terms.