The NGC catalog hosts containers for the top AI and data science software, tuned, tested and optimized by NVIDIA, as well as fully tested containers for HPC applications and data analytics. NGC catalog containers provide powerful and easy-to-deploy software proven to deliver the fastest results, allowing users to build solutions from a tested framework, with complete control.
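Containers in the catalog are pulled from the NGC registry with standard Docker commands. A minimal sketch (the PyTorch image and the `24.01-py3` tag are illustrative; check the catalog page for current tags, and note that some images require an NGC API key at login):

```shell
# Log in to the NGC registry (an NGC API key may be required for some images)
docker login nvcr.io

# Pull a framework container; the tag is illustrative --
# check the catalog entry for the current release tag
docker pull nvcr.io/nvidia/pytorch:24.01-py3

# Run it interactively with GPU access
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:24.01-py3
```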
Merlin TensorFlow
Container
The Merlin TensorFlow container lets users perform preprocessing and feature engineering with NVTabular, train a deep-learning-based recommender system model with TensorFlow, and serve the trained model with Triton Inference Server.
Riva Speech Clients
Container
Sample clients for Riva Speech Skills.
Riva Speech Skills
Container
Riva Speech Skills is a scalable Conversational AI service platform.
Merlin Inference
Container
This container allows users to deploy NVTabular workflows and HugeCTR or TensorFlow models to Triton Inference Server for production.
Merlin PyTorch
Container
The Merlin PyTorch container lets users perform preprocessing and feature engineering with NVTabular, train a deep-learning-based recommender system model with PyTorch, and serve the trained model with Triton Inference Server.
Merlin HugeCTR
Container
The Merlin HugeCTR container enables you to perform data preprocessing and feature engineering, train models with HugeCTR, and serve the trained model with Triton Inference Server.
Triton Inference Server
Container
Triton Inference Server is open-source software that lets teams deploy trained AI models from any framework, from local or cloud storage, on any GPU- or CPU-based infrastructure in the cloud, in the data center, or on embedded devices.
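A minimal sketch of serving a local model repository with the Triton container (the `24.01-py3` tag and `/path/to/model_repository` are illustrative placeholders):

```shell
# Pull the Triton Inference Server container from NGC
docker pull nvcr.io/nvidia/tritonserver:24.01-py3

# Serve models from a local repository; ports 8000/8001/8002
# expose the HTTP, gRPC, and metrics endpoints respectively
docker run --gpus=all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:24.01-py3 \
  tritonserver --model-repository=/models
```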
PyTorch
Container · Quick Deploy
PyTorch is a GPU-accelerated tensor computational framework. Functionality can be extended with common Python libraries such as NumPy and SciPy. Automatic differentiation is done with a tape-based system at the functional and neural network layer levels.
TensorFlow
Container · Quick Deploy
TensorFlow is an open source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture allowing easy deployment across a variety of platforms and devices.
TensorRT
Container
NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network and produces a highly optimized runtime engine that performs inference for that network.
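Inside the TensorRT container, the bundled `trtexec` tool performs this trained-network-to-engine conversion from the command line. A sketch, with illustrative file paths:

```shell
# Build an optimized runtime engine from a trained ONNX model,
# enabling FP16 precision (model.onnx / model.plan are placeholders)
trtexec --onnx=model.onnx --saveEngine=model.plan --fp16

# Load the built engine and benchmark inference with it
trtexec --loadEngine=model.plan
```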
NVIDIA Optimized Deep Learning Framework powered by Apache MXNet
Container
NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, is a deep learning framework that allows you to mix symbolic and imperative programming to maximize efficiency and productivity.
PaddlePaddle
Container
PaddlePaddle is China's first independently developed deep learning platform. It has been widely adopted in manufacturing, agriculture, and enterprise services, serving more than 4 million developers and 157,000 companies and powering 476,000 models.
Kaldi
Container
Kaldi is an open-source software framework for speech processing.
NVIDIA GPU Operator
Container
Deploys and manages NVIDIA GPU resources in Kubernetes.
Validator for NVIDIA GPU Operator
Container
Validates NVIDIA GPU Operator components.
Single Cell Examples
Container
Contains example notebooks demonstrating the use of RAPIDS for GPU-accelerated analysis of single-cell sequencing data.
NVIDIA vGPU Device Manager
Container
Manages NVIDIA vGPU devices in a Kubernetes cluster.
NVIDIA GPU Driver
Container
Provisions the NVIDIA GPU driver as a container.
NVIDIA Driver Manager For Kubernetes
Container
Manages NVIDIA driver upgrades in a Kubernetes cluster.
Clara AGX Metagenomics Classification
Container
Tools for taxonomic classification compiled with support for ARM and AGX hardware.
Clara AGX us4R-lite Ultrasound Container
Container
This container includes all necessary drivers and libraries to connect and run the us4R-lite Ultrasound platform on the Clara AGX Developer Kit hardware.
Clara AGX Triton Inference Server
Container
This release of Triton Inference Server is built with support only for Clara AGX hardware.
Triton Inference Server (formerly TensorRT Inference Server) simplifies the deployment of AI models at scale in production and maximizes inference performance.
Clara AGX PyTorch
Container
This release of PyTorch is built with support only for Clara AGX hardware.
PyTorch is a GPU-accelerated tensor computational framework with a Python front end.
Clara AGX Metagenomics Polish
Container
Tools for genomics polishing compiled with support for ARM and AGX hardware.
Clara AGX Metagenomics Assembly
Container
Tools for genome assembly compiled with support for ARM and AGX hardware.