What Is Triton Inference Server?
Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. Triton supports HTTP/REST and gRPC protocols that allow remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton is available as a shared library with a C API that allows the full functionality of Triton to be included directly in an application.
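As a minimal sketch of the HTTP/REST path, the example below uses the tritonclient Python package (pip install tritonclient[http]) to check server liveness and submit an inference request. The model name my_model, the INPUT0/OUTPUT0 tensor names, and the [1, 16] shape are hypothetical placeholders; the server is assumed to be reachable on Triton's default HTTP port, 8000.

```python
# Minimal HTTP/REST client sketch using the tritonclient package.
# "my_model" and the tensor names/shapes are hypothetical placeholders;
# substitute the ones your deployed model actually exposes.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_live()  # liveness check against the running server

# Describe the request: one FP32 input tensor and one requested output.
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))
out = httpclient.InferRequestedOutput("OUTPUT0")

# Submit the request and read the result back as a NumPy array.
result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT0"))
```

The same request can be issued over gRPC (default port 8001) via tritonclient.grpc, whose client API mirrors the HTTP one.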
Five Docker images are available:
The 23.08.xx-py3 image contains the Triton Inference Server with support for TensorFlow, PyTorch, TensorRT, ONNX, and OpenVINO models.
The 23.08.xx-py3-sdk image contains Python and C++ client libraries, client examples, and the Model Analyzer (see the client sketch after this list).
The 23.08.xx-py3-min image is used as the base for creating custom Triton server containers as described in Customize Triton Container.
The 23.08.xx-pyt-python-py3 image contains the Triton Inference Server with support for PyTorch and Python backends only.
The 23.08.xx-tf2-python-py3 image contains the Triton Inference Server with support for TensorFlow 2.x and Python backends only.
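The 23.08.xx-py3-sdk image ships the client libraries preinstalled. As a sketch of what they offer beyond inference, the following uses the gRPC client (also available via pip install tritonclient[grpc]) to list the models in the server's repository; the endpoint on Triton's default gRPC port, 8001, and the presence of loaded models are assumptions about your deployment.

```python
# Sketch: query a running Triton server over gRPC using the client
# libraries shipped in the -py3-sdk image. Assumes the server is
# reachable on Triton's default gRPC port, 8001.
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")
print("server ready:", client.is_server_ready())

# List every model in the server's model repository and its load state.
for model in client.get_model_repository_index().models:
    print(model.name, model.state)
```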
What Is Triton Inference Server Production Branch October 2023?
The Triton Inference Server Production Branch, exclusively available with NVIDIA AI Enterprise, is a 9-month supported, API-stable branch that includes monthly fixes for high and critical software vulnerabilities. This branch provides a stable and secure environment for building your mission-critical AI applications. A new Triton Inference Server production branch is released every six months, with a three-month overlap between consecutive releases.
Getting started with Triton Inference Server Production Branch
Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA AI Enterprise Documentation.
For an overview of the features included in the Triton Inference Server Production Branch as of October 2023, please refer to the Release Notes for Triton Inference Server 23.08.
For more information about Triton Inference Server, see the Triton Inference Server documentation.
Additionally, if you're looking for information on Docker containers and guidance on running a container, review the Containers For Deep Learning Frameworks User Guide.
Compatible Infrastructure Software Versions
For optimal performance, it is highly recommended to deploy the supported NVIDIA AI Enterprise Infrastructure software in conjunction with your AI software.
Security Vulnerabilities in Open Source Packages
Review the Security Scanning tab for the latest security scan results.
For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.
Get access to knowledge base articles and support cases, or submit a ticket.
NVIDIA AI Enterprise Documentation
Visit the NVIDIA AI Enterprise Documentation Hub for release documentation, deployment guides, and more.