The NGC catalog hosts containers for the top AI and data science software, tuned, tested and optimized by NVIDIA, as well as fully tested containers for HPC applications and data analytics. NGC catalog containers provide powerful and easy-to-deploy software proven to deliver the fastest results, allowing users to build solutions from a tested framework, with complete control.
CUDA
CUDA is a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of NVIDIA GPUs.
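As a quick smoke test, the CUDA base image can be pulled from NGC and run against the host driver. The tag below is illustrative; check the catalog for current tags:

```shell
# Pull an example CUDA base image from NGC (tag is an assumption; pick a
# current one from the catalog) and verify the GPU is visible to containers.
docker pull nvcr.io/nvidia/cuda:12.2.0-base-ubuntu22.04
docker run --rm --gpus all nvcr.io/nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If the host driver and NVIDIA Container Toolkit are installed correctly, `nvidia-smi` prints the GPU inventory from inside the container.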
NVIDIA cuQuantum Appliance
The NVIDIA cuQuantum Appliance is a highly performant multi-GPU solution for quantum circuit simulation. It contains NVIDIA’s cuStateVec and cuTensorNet libraries which optimize state vector and tensor network simulation, respectively.
NVIDIA MIG Manager For Kubernetes
Manage MIG partitions in Kubernetes with a simple label change to a node.
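As a sketch of that workflow, assuming the MIG manager is watching the `nvidia.com/mig.config` node label (the node name and profile below are illustrative; `all-1g.5gb` is one of the default profile names), reconfiguring a node is a single label change:

```shell
# Request a MIG geometry on one node; the MIG manager reconciles the
# hardware to match. "worker-1" and "all-1g.5gb" are example values.
kubectl label nodes worker-1 nvidia.com/mig.config=all-1g.5gb --overwrite
```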
WarpDrive for Multi-Agent Reinforcement Learning
WarpDrive is a flexible, lightweight, and easy-to-use open-source reinforcement learning (RL) framework that implements end-to-end multi-agent RL on one or more GPUs.
DCGM Exporter
Monitor GPUs in Kubernetes using NVIDIA DCGM. This is an exporter that exposes GPU metrics to a Prometheus monitoring solution in Kubernetes.
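A minimal Prometheus scrape job for the exporter might look like the following fragment. Port 9400 is the exporter's default metrics port; the target address is an illustrative in-cluster service name:

```yaml
# prometheus.yml fragment: scrape GPU metrics from dcgm-exporter.
scrape_configs:
  - job_name: dcgm-exporter
    static_configs:
      - targets: ["dcgm-exporter.gpu-monitoring.svc:9400"]  # example address
```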
DOCA Base Image
DOCA enables the development of applications that deliver breakthrough networking, security, and storage performance by harnessing the power of NVIDIA DPUs.
NVIDIA TensorRT
NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network and produces a highly optimized runtime engine that performs inference for that network.
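The TensorRT container ships with the `trtexec` command-line tool; a typical flow builds an engine from an ONNX model and then benchmarks it. File names below are placeholders:

```shell
# Build an optimized TensorRT engine from an ONNX model and save it.
# FP16 precision is optional and depends on the target GPU.
trtexec --onnx=model.onnx --saveEngine=model.plan --fp16
# Reload the saved engine and time inference with the same tool.
trtexec --loadEngine=model.plan
```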
NVIDIA MOFED Driver
Provision the NVIDIA MOFED driver using containers.
NVIDIA HPC SDK
The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries, and tools essential to maximizing developer productivity and the performance and portability of HPC applications.
IP over InfiniBand (IPoIB) CNI Plugin
The IP over InfiniBand (IPoIB) CNI plugin allows users to create an IPoIB child link and move it into a pod.
NVIDIA GPU Operator
Deploy and Manage NVIDIA GPU resources in Kubernetes.
Validator for NVIDIA GPU Operator
Validates NVIDIA GPU Operator components.
Riva Speech Clients
Sample clients for Riva Speech Skills.
Riva Speech Skills
Riva Speech Skills is a scalable Conversational AI service platform.
NVIDIA Driver Manager For Kubernetes
Manages NVIDIA driver upgrades in a Kubernetes cluster.
Merlin PyTorch
The Merlin PyTorch container lets users perform data preprocessing and feature engineering with NVTabular, train a deep-learning-based recommender system model with PyTorch, and serve the trained model on Triton Inference Server.
Merlin HugeCTR
The Merlin HugeCTR container enables you to perform data preprocessing and feature engineering, train models with HugeCTR, and then serve the trained model with Triton Inference Server.
Merlin TensorFlow
The Merlin TensorFlow container lets users perform data preprocessing and feature engineering with NVTabular, train a deep-learning-based recommender system model with TensorFlow, and serve the trained model on Triton Inference Server.
Triton Inference Server
Triton Inference Server is open-source software that lets teams deploy trained AI models from any framework, from local or cloud storage, on any GPU- or CPU-based infrastructure in the cloud, in the data center, or on embedded devices.
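Triton serves models from a model repository directory. Assuming the default HTTP port, a minimal launch and readiness check looks like the following; the image tag and model path are illustrative:

```shell
# Start Triton with a local model repository (tag is an example; check
# the catalog for current releases).
docker run --rm --gpus all -p 8000:8000 \
  -v /path/to/models:/models \
  nvcr.io/nvidia/tritonserver:23.10-py3 \
  tritonserver --model-repository=/models

# In another shell: the v2 readiness endpoint returns HTTP 200 once the
# server is ready to accept inference requests.
curl -s -o /dev/null -w "%{http_code}\n" localhost:8000/v2/health/ready
```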
TensorFlow
TensorFlow is an open-source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture, allowing easy deployment across a variety of platforms and devices.
NVIDIA Optimized Deep Learning Framework powered by Apache MXNet
NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, is a deep learning framework that allows you to mix symbolic and imperative programming to maximize efficiency and productivity.
PyTorch
PyTorch is a GPU-accelerated tensor computation framework. Functionality can be extended with common Python libraries such as NumPy and SciPy. Automatic differentiation is done with a tape-based system at both the functional and neural network layer levels.
Kaldi
Kaldi is an open-source software framework for speech processing.
PaddlePaddle
PaddlePaddle is the first independently developed deep learning platform in China. It has been widely adopted across manufacturing, agriculture, and enterprise services, serving more than 4 million developers and 157,000 companies and generating 476,000 models.