The NGC catalog hosts containers for the top AI and data science software, tuned, tested, and optimized by NVIDIA, as well as fully tested containers for HPC applications and data analytics. NGC catalog containers provide powerful, easy-to-deploy software proven to deliver the fastest results, letting users build solutions from a tested framework while retaining complete control.
Triton Inference Server (formerly TensorRT Inference Server)
Container
Triton Inference Server is open-source software that lets teams deploy trained AI models from any framework, from local or cloud storage, on any GPU- or CPU-based infrastructure in the cloud, in the data center, or on embedded devices.
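As a usage sketch of the containerized server (the image tag and model-repository path below are placeholders; pick a current tag from the catalog):

```shell
# Pull the Triton container from NGC (tag is a placeholder).
docker pull nvcr.io/nvidia/tritonserver:22.05-py3

# Launch the server, mounting a local model repository.
# Ports: 8000 = HTTP, 8001 = gRPC, 8002 = Prometheus metrics.
docker run --gpus all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:22.05-py3 \
  tritonserver --model-repository=/models

# Check server readiness over the HTTP endpoint.
curl localhost:8000/v2/health/ready
```

The same image can serve models from cloud object storage by passing a bucket URL as the model repository instead of a local mount.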
TensorFlow
Container
TensorFlow is an open source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture allowing easy deployment across a variety of platforms and devices.
PyTorch
Container
PyTorch is a GPU-accelerated tensor computation framework. Its functionality can be extended with common Python libraries such as NumPy and SciPy. Automatic differentiation is performed with a tape-based system at both the functional and neural-network layer levels.
TensorRT
Container
NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network and produces a highly optimized runtime engine that performs inference for that network.
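Inside the container, a quick way to exercise the engine-building workflow is the bundled `trtexec` tool. A hedged sketch (the ONNX file name is a placeholder for your own exported network):

```shell
# Convert a trained ONNX model into a serialized TensorRT engine,
# enabling FP16 precision where the hardware supports it.
trtexec --onnx=model.onnx --saveEngine=model.plan --fp16

# Reload the saved engine and benchmark inference latency/throughput.
trtexec --loadEngine=model.plan
```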
Kaldi
Container
Kaldi is an open-source software framework for speech processing.
TAO Toolkit for Conversational AI
Container
Docker container with workflows implemented in PyTorch as part of the Train Adapt Optimize (TAO) Toolkit.
Merlin TensorFlow Training
Container
This container allows users to do preprocessing and feature engineering with NVTabular, and then train a deep-learning-based recommender system model with TensorFlow.
Merlin Training
Container
This container allows users to do preprocessing and feature engineering with NVTabular, and then train a deep-learning-based recommender system model with HugeCTR.
DCGM Exporter
Container
Monitor GPUs in Kubernetes using NVIDIA DCGM. This container exports GPU metrics for a Prometheus-based monitoring solution in Kubernetes.
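The exporter can also be tried standalone outside Kubernetes; a minimal sketch (the image tag is a placeholder, and 9400 is the exporter's default metrics port):

```shell
# Run the DCGM exporter against the local GPUs (tag is a placeholder).
docker run -d --gpus all --rm -p 9400:9400 \
  nvcr.io/nvidia/k8s/dcgm-exporter:2.4.6-2.6.10-ubuntu20.04

# GPU metrics are then exposed in Prometheus text format.
curl localhost:9400/metrics
```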
CUDA Samples
Container
A collection of containerized CUDA Samples.
NVIDIA Kubernetes Device Plugin
Container
The NVIDIA Kubernetes Device Plugin registers GPUs as compute resources in the Kubernetes cluster.
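With the device plugin running, pods request GPUs through the `nvidia.com/gpu` extended resource. A sketch of such a request (the pod name, CUDA image tag, and limit are examples, not prescriptions):

```shell
# Apply a minimal pod that requests one GPU and runs nvidia-smi.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.0.0-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1   # scheduled only onto nodes with a free GPU
EOF
```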
NVIDIA GPU Feature Discovery for Kubernetes
Container
Plugin for the Kubernetes Node Feature Discovery for adding GPU node labels.
NVIDIA MIG Manager For Kubernetes
Container
Manage MIG partitions in Kubernetes with a simple label change to a node.
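The label change in question is `nvidia.com/mig.config`, which the MIG Manager watches and reconciles. A hedged sketch (the node name is a placeholder; `all-1g.5gb` is one of the named profiles in the default configuration):

```shell
# Ask the MIG Manager to repartition the node's GPUs into 1g.5gb slices.
kubectl label nodes worker-0 nvidia.com/mig.config=all-1g.5gb --overwrite
```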
NVIDIA GPU Operator
Container
Deploy and manage NVIDIA GPU resources in Kubernetes.
Validator for NVIDIA GPU Operator
Container
Validates NVIDIA GPU Operator components.
NVIDIA Container Toolkit
Container
Build and run GPU-accelerated Docker containers using the NVIDIA Container Toolkit.
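Once the toolkit is installed on the host, Docker exposes GPUs through the `--gpus` flag. A minimal sketch (the CUDA base image tag is an example; choose a current one from NGC):

```shell
# Expose all host GPUs to the container and verify with nvidia-smi.
docker run --rm --gpus all nvcr.io/nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```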
NVIDIA GPU Driver
Container
Provision the NVIDIA GPU driver using containers.
M-Star CFD
Container
M-Star CFD is a multi-physics modeling package used to simulate fluid flow, heat transfer, species transport, chemical reactions, particle transport, and rigid-body dynamics.
Merlin PyTorch Training
Container
This container allows users to do preprocessing and feature engineering with NVTabular, and then train a deep-learning-based recommender system model with PyTorch.
LAMMPS
Container
Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a software application designed for molecular dynamics simulations.
Clara Train SDK
Container
Clara Train SDK is a domain-optimized developer application framework. It includes APIs for AI-Assisted Annotation, which make any medical viewer AI-capable, and a TensorFlow-based training framework with pre-trained models to kick-start AI development using techniques such as transfer learning, federated learning, and AutoML.
Kubevirt GPU Device Plugin
Container
A Kubernetes device plugin built for KubeVirt. KubeVirt is a Kubernetes-based technology that provides a unified development platform where users can build, modify, and deploy applications residing in both application containers and virtual machines.
RAPIDS
Container
The RAPIDS suite of software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs.
NVIDIA Driver Manager For Kubernetes
Container
Manages NVIDIA driver upgrades in a Kubernetes cluster.
Merlin Inference
Container
This container allows users to deploy NVTabular workflows and HugeCTR or TensorFlow models to Triton Inference Server for production.