NGC | Catalog
Containers
The NGC catalog hosts containers for AI/ML, metaverse, and HPC applications. They are performance-optimized, tested, and ready to deploy on GPU-powered on-prem, cloud, and edge systems.
Isaac Sim
Container
NVIDIA Isaac Sim™ is a robotics simulation application framework built on NVIDIA Omniverse™.
Merlin HugeCTR
Container
The Merlin HugeCTR container enables you to perform data preprocessing and feature engineering, train models with HugeCTR, and then serve the trained model with Triton Inference Server.
NVIDIA Kubernetes Device Plugin
Container
The NVIDIA Kubernetes Device Plugin registers GPUs as schedulable compute resources in a Kubernetes cluster.
Omniverse Farm management services
Container
NVIDIA MLPerf Inference
Container
MLPerf Inference containers are base containers for people interested in NVIDIA's MLPerf Inference submission results.
RAPIDS CLX
Container
The RAPIDS suite of software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs.
RAPIDS
Container · Quick Deploy
The RAPIDS suite of software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs.
RAPIDS Core
Container
The RAPIDS suite of software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs.
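The RAPIDS entries above describe running end-to-end data science pipelines on GPUs. A minimal sketch with cuDF, the RAPIDS DataFrame library; the column names and values are illustrative, not taken from the catalog.

```python
# Minimal cuDF sketch: a pandas-like group-by aggregation that runs on the GPU.
# The columns ("store", "sales") and values are placeholders for illustration.
import cudf

df = cudf.DataFrame({
    "store": ["a", "b", "a", "b"],
    "sales": [10.0, 20.0, 30.0, 40.0],
})

# Group-by and aggregation execute entirely on the GPU.
mean_sales = df.groupby("store")["sales"].mean()
print(mean_sales)
```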
NVIDIA GPU Feature Discovery for Kubernetes
Container
A plugin for Kubernetes Node Feature Discovery that adds GPU node labels.
NVIDIA Container Toolkit
Container
Build and run GPU-accelerated Docker containers.
MegaMolBART v0.2
Container
Inference container for the MegaMolBART model.
Riva Speech Skills
Container
Riva Speech Skills is a scalable conversational AI service platform.
CUDA
Container
CUDA is a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of NVIDIA GPUs.
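CUDA exposes the GPU through a kernel-and-grid programming model. A hedged sketch of that model from Python using Numba's CUDA support; Numba is an assumption here and is not part of this catalog entry, which ships the CUDA toolkit itself.

```python
# Hedged sketch of CUDA's kernel/grid programming model via Numba (an assumption;
# the CUDA container ships the toolkit and compilers, not Numba).
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)            # global thread index in a 1-D grid
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1 << 20
a = np.ones(n, dtype=np.float32)
b = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # kernel launch
assert out[0] == 2.0
```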
Merlin TensorFlow
Container
The Merlin TensorFlow container lets you do preprocessing and feature engineering with NVTabular, train a deep-learning-based recommender model with TensorFlow, and serve the trained model with Triton Inference Server.
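The preprocessing and feature-engineering step the Merlin containers describe is done with NVTabular. A hedged sketch of that step; the column names, ops, and parquet paths are placeholders, not part of the catalog entry.

```python
# Hedged NVTabular sketch: declare feature-engineering ops as a graph, fit the
# workflow on a dataset, and write the transformed data back out. Column names
# ("user_id", "item_id", "price") and file paths are placeholders.
import nvtabular as nvt
from nvtabular import ops

cat_features = ["user_id", "item_id"] >> ops.Categorify()
cont_features = ["price"] >> ops.Normalize()

workflow = nvt.Workflow(cat_features + cont_features)
train = nvt.Dataset("train.parquet")

workflow.fit(train)
workflow.transform(train).to_parquet("processed/")
```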
NeMo
Container
NVIDIA NeMo (Neural Modules) is an open-source toolkit for conversational AI. It lets data scientists and researchers easily build new state-of-the-art speech and NLP networks from API-compatible building blocks.
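A hedged sketch of loading a pretrained NeMo ASR model and transcribing audio, assuming the NeMo 1.x ASR collection; the checkpoint name and audio path are placeholders for illustration.

```python
# Hedged NeMo sketch (NeMo 1.x ASR collection); the checkpoint name
# "stt_en_conformer_ctc_small" and "sample.wav" are placeholders.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
    model_name="stt_en_conformer_ctc_small"
)
print(asr_model.transcribe(["sample.wav"]))
```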
Merlin PyTorch
Container
The Merlin PyTorch container lets you do preprocessing and feature engineering with NVTabular, train a deep-learning-based recommender model with PyTorch, and serve the trained model with Triton Inference Server.
NVIDIA MIG Manager For Kubernetes
Container
Manage MIG partitions in Kubernetes with a simple label change to a node.
Triton Inference Server
Container
Triton Inference Server is open-source software that lets teams deploy trained AI models from any framework, from local or cloud storage, on any GPU- or CPU-based infrastructure in the cloud, in the data center, or on embedded devices.
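A hedged sketch of sending a request to a running Triton server with the tritonclient Python package; the model name and tensor names/shapes ("my_model", "INPUT0", "OUTPUT0", [1, 16]) are placeholders for whatever model repository the server is actually serving.

```python
# Hedged Triton client sketch over HTTP; model and tensor names are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

data = np.random.rand(1, 16).astype(np.float32)
infer_input = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

# Run inference and read the output tensor back as NumPy.
response = client.infer(model_name="my_model", inputs=[infer_input])
print(response.as_numpy("OUTPUT0"))
```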
TAO Toolkit
Container
Docker containers distributed as part of the TAO Toolkit package.
Morpheus
Container
NVIDIA Morpheus is an open AI application framework for cybersecurity developers.
PyTorch
Container · Quick Deploy
PyTorch is a GPU-accelerated tensor computation framework. Functionality can be extended with common Python libraries such as NumPy and SciPy. Automatic differentiation is done with a tape-based system at both the functional and neural-network layer levels.
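A minimal sketch of the two points in the description: tensor computation on the GPU and tape-based automatic differentiation.

```python
# Minimal PyTorch sketch: GPU tensors plus tape-based autograd.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(3, 3, device=device, requires_grad=True)
y = (x * x).sum()

# backward() replays the recorded tape to compute dy/dx = 2x.
y.backward()
print(x.grad)
```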
TensorFlow
Container · Quick Deploy
TensorFlow is an open source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture, allowing easy deployment across a variety of platforms and devices.
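A minimal sketch of defining and training a small Keras model with TensorFlow; the layer sizes and synthetic data are illustrative only.

```python
# Minimal TensorFlow/Keras sketch with synthetic data; sizes are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```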
NVIDIA Optimized Deep Learning Framework powered by Apache MXNet
Container
NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, is a deep learning framework that lets you mix symbolic and imperative programming to maximize efficiency and productivity.
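A hedged sketch of the symbolic/imperative mix the description refers to, using MXNet Gluon: the network runs imperatively until hybridize() switches it to a compiled symbolic graph. Layer sizes and input shape are illustrative.

```python
# Hedged MXNet Gluon sketch: imperative execution, then hybridize() to go symbolic.
import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(16, activation="relu"), nn.Dense(1))
net.initialize()

x = mx.nd.random.uniform(shape=(4, 8))
print(net(x))      # imperative (define-by-run) execution

net.hybridize()    # compile to a symbolic graph for optimized execution
print(net(x))
```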
Kaldi
Container
Kaldi is an open-source software framework for speech processing.
TensorRT
Container
NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network and produces a highly optimized runtime engine that performs inference for that network.
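A hedged sketch of building an optimized runtime engine from a trained network with the TensorRT 8.x Python API; "model.onnx" is a placeholder path, and the flag shown follows the explicit-batch ONNX workflow.

```python
# Hedged TensorRT sketch (TensorRT 8.x Python API); "model.onnx" is a placeholder.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)  # serialized engine
```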