This collection provides performance-optimized PyTorch and TensorFlow containers that AI practitioners can use to develop and deploy their solutions on any GPU-accelerated on-premises, cloud, or edge system.
Both the PyTorch and TensorFlow containers from the NGC catalog are optimized for GPU acceleration and contain a validated set of libraries that enable and optimize GPU performance. These containers also include software for accelerating ETL (DALI, RAPIDS), training (cuDNN, NCCL), and inference (TensorRT) workloads.
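As a quick sanity check after pulling one of these images, a short script like the following can confirm that the GPU and the bundled acceleration libraries are visible from inside the container. This is a minimal sketch using the PyTorch container; an analogous check applies to the TensorFlow container, and the exact versions reported depend on the container release you pull.

```python
# Minimal sketch: run inside the NGC PyTorch container to verify that the
# GPU stack described above (CUDA, cuDNN, NCCL) is visible to the framework.
import torch

# Fail fast if no CUDA-capable GPU is exposed to the container.
assert torch.cuda.is_available(), "No CUDA-capable GPU visible to PyTorch"

print("GPU:         ", torch.cuda.get_device_name(0))
print("CUDA (build):", torch.version.cuda)
print("cuDNN:       ", torch.backends.cudnn.version())
print("NCCL:        ", torch.cuda.nccl.version())
```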
NVIDIA releases a new version of these containers monthly with optimized libraries, giving users higher training and inference performance on the same GPU-powered system.
Visit the PyTorch and TensorFlow pages to view detailed instructions on running the containers.
See the latest Release Notes for PyTorch and TensorFlow.
For a full list of the supported software and specific versions packaged with each framework container image, see the Frameworks Support Matrix.
By pulling and using the container, you accept the terms and conditions of this End User License Agreement.