NGC Catalog
Triton Inference Server PB October 2024 (PB 24h2)

Description
Triton Inference Server Production Branch October 2024 (PB 24h2) offers a 9-month lifecycle for API stability, with monthly patches for high and critical software vulnerabilities.
Publisher
NVIDIA
Latest Tag
24.08.08-vllm-python-py3
Modified
May 7, 2025
Compressed Size
9.29 GB
Multinode Support
No
Multi-Arch Support
Yes
24.08.08-vllm-python-py3 (Latest) Security Scan Results

Linux / amd64

What Is Triton Inference Server?

Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. Triton supports an HTTP/REST and GRPC protocol that allows remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton is available as a shared library with a C API that allows the full functionality of Triton to be included directly in an application.
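Triton's HTTP/REST endpoint follows the KServe v2 inference protocol, so a remote inference request is an HTTP POST to `/v2/models/<model>/infer` with a JSON body describing the input tensors. The sketch below builds such a request without sending it; the model name, input name, and data are placeholders for illustration.

```python
import json

# Sketch of a KServe v2 inference request as served by Triton's HTTP/REST
# endpoint. "my_model" and "INPUT0" are hypothetical placeholders.
def build_infer_request(model_name, input_name, data):
    """Return the URL path and JSON body for a v2 inference request."""
    path = f"/v2/models/{model_name}/infer"
    body = {
        "inputs": [
            {
                "name": input_name,
                "shape": [1, len(data)],   # batch of one row
                "datatype": "FP32",
                "data": data,
            }
        ]
    }
    return path, json.dumps(body)

path, body = build_infer_request("my_model", "INPUT0", [1.0, 2.0, 3.0])
print(path)  # /v2/models/my_model/infer
```

The same request could be issued with any HTTP client, or with the `tritonclient` Python package, which wraps this protocol.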

Three Docker images are available:

  • The 24.08.xx-py3 image contains the Triton Inference Server with support for TensorFlow, PyTorch, TensorRT, ONNX, and OpenVINO models.
  • The 24.08.xx-py3-sdk image contains Python and C++ client libraries, client examples, and the Model Analyzer.
  • The 24.08.xx-py3-min image is used as the base for creating custom Triton server containers as described in Customize Triton Container.
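A typical launch of the server image can be sketched as below; the registry path and model-repository mount point are illustrative (consult the deployment guides for the exact image path for your entitlement), while ports 8000, 8001, and 8002 are Triton's default HTTP, gRPC, and metrics ports.

```shell
# Illustrative launch command, printed rather than executed so it can be
# reviewed before running on a GPU host. The image path is an assumption
# based on the tag shown on this page.
IMAGE="nvcr.io/nvidia/tritonserver:24.08.08-vllm-python-py3"
CMD="docker run --gpus all --rm -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models ${IMAGE} \
  tritonserver --model-repository=/models"
echo "${CMD}"
```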

What Is Triton Inference Server Production Branch October 2024?

The Triton Inference Server Production Branch, exclusively available with NVIDIA AI Enterprise, is a 9-month supported, API-stable branch that includes monthly fixes for high and critical software vulnerabilities. This branch provides a stable and secure environment for building your mission-critical AI applications. A new production branch is released every six months, with a three-month overlap between consecutive releases.

Getting started with Triton Inference Server Production Branch

Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA AI Enterprise Documentation.

For an overview of the features included in the Triton Inference Server Production Branch October 2024, refer to the Release Notes for Triton Inference Server 24.08.

For more information about the Triton Inference Server, see:

  • Triton Inference Server Quick Start Guide
  • Triton Inference Server User Guide

Additionally, if you're looking for information on Docker containers and guidance on running a container, review the Containers For Deep Learning Frameworks User Guide.

Compatible Infrastructure Software Versions

For optimal performance, it is highly recommended that you deploy the supported NVIDIA AI Enterprise infrastructure software in conjunction with your AI software.

Production Branch - October 2024 (24h2) is compatible with NVIDIA AI Enterprise Infrastructure 5.

Release Notes

24.08.08-triton release - Nsight Systems
An upgrade of Nsight Systems is planned for a future release; this release currently ships version 2024.4.2.133. See the Nsight Systems release documentation for details.

Known Issues

Warning: The pickle module is not secure. Only unpickle data you trust. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never unpickle data that could have come from an untrusted source, or that could have been tampered with.
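The warning above can be made concrete with a short sketch: a pickle payload may name an arbitrary callable, which the `pickle` module invokes during unpickling.

```python
import pickle

# Minimal illustration of why unpickling untrusted data is dangerous:
# __reduce__ lets a pickled object name any callable to run on load.
# Here it is a harmless print, but an attacker could substitute
# os.system or similar to execute arbitrary commands.
class Malicious:
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # runs print() instead of restoring an object
```

This is why the warning applies even to data that merely passed through an untrusted channel, not just data authored by an attacker.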

OSS License Archive

The OSS License Archive contains all project-related licenses. It ensures transparency and compliance with legal requirements by detailing the terms and conditions associated with the use, modification, and distribution of this project.

Security Vulnerabilities in Open Source Packages

Please review the Security Scanning tab to view the latest security scan results.

For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.

Get Help

Enterprise Support

Get access to knowledge base articles and support cases or submit a ticket.

NVIDIA AI Enterprise Documentation

Visit the NVIDIA AI Enterprise Documentation Hub for release documentation, deployment guides and more.

NVIDIA Licensing Portal

Go to the NVIDIA Licensing Portal to manage the software licenses for your products.