The NGC catalog hosts containers for AI/ML, metaverse, and HPC applications that are performance-optimized, tested, and ready to deploy on GPU-powered on-premises, cloud, and edge systems.
The Merlin HugeCTR container enables you to perform data preprocessing and feature engineering, train models with HugeCTR, and then serve the trained model with Triton Inference Server.
The Merlin PyTorch container allows users to do preprocessing and feature engineering with NVTabular, train a deep-learning-based recommender system model with PyTorch, and serve the trained model on Triton Inference Server.
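One of NVTabular's core feature-engineering ops maps raw categorical values to contiguous integer IDs suitable for embedding lookups. The following is a pure-Python toy sketch of that idea only, not the NVTabular API; the reserved out-of-vocabulary ID of 0 is an assumption made for illustration.

```python
# Toy sketch of a "Categorify"-style feature-engineering step:
# map each distinct categorical value to a contiguous integer ID.
# This is NOT the NVTabular API, just an illustration of the concept.

def categorify(values):
    """Return integer IDs for a list of categorical values.

    ID 0 is reserved here for unseen (out-of-vocabulary) values,
    so known categories start at 1.
    """
    mapping = {}
    ids = []
    for v in values:
        if v not in mapping:
            mapping[v] = len(mapping) + 1
        ids.append(mapping[v])
    return ids, mapping

item_ids, vocab = categorify(["shoe", "hat", "shoe", "sock"])
# item_ids == [1, 2, 1, 3]
```

The resulting integer IDs can be fed directly into an embedding table, which is why this transform is a standard first step in deep-learning recommender pipelines.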
CUDA is a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of NVIDIA GPUs.
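The core of the CUDA programming model is that work is split across a grid of thread blocks, with each thread computing its own global index. The following pure-Python loop simulates that indexing scheme for a vector add; on a real GPU the blocks and threads run in parallel and the kernel would be written in CUDA C++.

```python
# Pure-Python simulation of CUDA's grid/block/thread indexing for a
# vector add. The equivalent CUDA C++ kernel body would be:
#   int i = blockIdx.x * blockDim.x + threadIdx.x;
#   if (i < n) c[i] = a[i] + b[i];

def vector_add(a, b, block_dim=4):
    n = len(a)
    c = [0.0] * n
    # Number of blocks, rounded up so every element is covered.
    grid_dim = (n + block_dim - 1) // block_dim
    for block_idx in range(grid_dim):        # blocks run in parallel on a GPU
        for thread_idx in range(block_dim):  # threads in a block, also parallel
            i = block_idx * block_dim + thread_idx
            if i < n:                        # guard: last block may overhang
                c[i] = a[i] + b[i]
    return c

print(vector_add([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))
```

The bounds check (`if i < n`) mirrors the guard every real CUDA kernel needs, because the grid size is rounded up to a whole number of blocks.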
NVIDIA NeMo (Neural Modules) is an open-source toolkit for conversational AI. It is built for data scientists and researchers to easily build new state-of-the-art speech and NLP networks from API-compatible building blocks that can be connected together.
A Docker container is built from https://github.com/NVIDIA/spark-rapids-container.
It is used on Databricks to quickly deploy a GPU-accelerated Spark cluster with Spark RAPIDS (https://github.com/NVIDIA/spark-rapids).
NVIDIA Omniverse™ Replicator is an SDK for generating physically accurate 3D synthetic data and for developing custom synthetic data generation (SDG) tools.
The Merlin TensorFlow container allows users to do preprocessing and feature engineering with NVTabular, train a deep-learning-based recommender system model with TensorFlow, and serve the trained model on Triton Inference Server.
PyTorch is a GPU-accelerated tensor computation framework. Its functionality can be extended with common Python libraries such as NumPy and SciPy. Automatic differentiation is done with a tape-based system at both the functional and neural network layer levels.
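"Tape-based" means that operations are recorded as they execute, and gradients are computed by replaying that record in reverse with the chain rule. The toy class below sketches this idea in plain Python; it is a conceptual illustration only, not PyTorch's implementation, and the operator set and class name are invented for the example.

```python
# Minimal sketch of tape-based reverse-mode automatic differentiation,
# the idea behind torch.autograd. This toy is NOT PyTorch's code.

class Var:
    def __init__(self, value, parents=None):
        self.value = value
        self.grad = 0.0
        # The "tape": (parent, local_gradient) pairs recorded at op time.
        self.parents = parents or []

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(x*y)/dx = y,  d(x*y)/dy = x
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Replay the tape backwards, accumulating chain-rule products.
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

x = Var(3.0)
y = Var(4.0)
z = x * y + x       # z = x*y + x
z.backward()
# dz/dx = y + 1 = 5.0,  dz/dy = x = 3.0
```

Because `x` appears twice in the expression, its gradient is accumulated from both paths, which is why `backward` uses `+=` rather than assignment.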
Triton Inference Server is open-source software that lets teams deploy trained AI models from any framework, from local or cloud storage, on any GPU- or CPU-based infrastructure in the cloud, the data center, or embedded devices.
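Clients talk to Triton over HTTP or gRPC using the KServe v2 inference protocol, where a request is a JSON body posted to `/v2/models/<model_name>/infer`. The sketch below builds such a body; the model name `my_model` and the input tensor name `INPUT0` are hypothetical and must match your deployed model's configuration.

```python
import json

# Sketch of a request body for Triton's HTTP endpoint:
#   POST /v2/models/<model_name>/infer   (KServe v2 inference protocol)
# "INPUT0" and "my_model" are placeholders; real names come from
# the model's config.pbtxt in the Triton model repository.

def build_infer_request(data, shape, datatype="FP32", input_name="INPUT0"):
    return {
        "inputs": [
            {
                "name": input_name,
                "shape": shape,
                "datatype": datatype,
                "data": data,  # flattened, row-major values
            }
        ]
    }

body = build_infer_request([1.0, 2.0, 3.0, 4.0], shape=[1, 4])
payload = json.dumps(body)
# Send `payload` with any HTTP client to http://<host>:8000/v2/models/my_model/infer
```

The same structure works regardless of which framework produced the model, which is the point of Triton's framework-agnostic serving.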
TensorFlow is an open-source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture, allowing easy deployment across a variety of platforms and devices.
NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, is a deep learning framework that allows you to mix symbolic and imperative programming to maximize efficiency and productivity.
NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network and produces a highly optimized runtime engine that performs inference for that network.
PaddlePaddle is China's first independently developed deep learning platform. It has been widely adopted in manufacturing, agriculture, and enterprise services, serving more than 4 million developers and 157,000 companies and powering 476,000 models.
The Holoscan container includes the built Holoscan libraries, GXF extensions, headers, example source code, and sample datasets. It is the simplest way to run sample streaming applications or create your own application using the Holoscan SDK.
NVIDIA Clara Parabricks is an accelerated compute framework that supports applications across the genomics industry, primarily supporting analytical workflows for DNA, RNA, and somatic mutation detection.