Architecture: Linux / arm64
Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. Triton supports an HTTP/REST and gRPC protocol that allows remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton is available as a shared library with a C API that allows the full functionality of Triton to be included directly in an application.
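As a minimal sketch of the HTTP/REST protocol, assuming a server listening on the default port 8000 and a placeholder model name and input tensor, a remote client could check readiness and request inference with curl:

    # Check that the server is live and ready (KServe v2 protocol)
    curl -s http://localhost:8000/v2/health/ready

    # Request inference; the model name, input name, shape, and data are
    # placeholders -- replace them with a model from your repository.
    curl -s -X POST http://localhost:8000/v2/models/my_model/infer \
        -H 'Content-Type: application/json' \
        -d '{"inputs": [{"name": "INPUT0", "shape": [1, 4], "datatype": "FP32", "data": [1.0, 2.0, 3.0, 4.0]}]}'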
Eight Docker images are available; an example pull-and-run command is shown after the list:
The 23.08.xx-py3-igpu/-dgpu images contain the Triton inference server with support for TensorFlow, PyTorch, TensorRT, ONNX, and OpenVINO models.
The 23.08.xx-py3-sdk-igpu/-dgpu images contain Python and C++ client libraries, client examples, and the Model Analyzer.
The 23.08.xx-py3-min-igpu/-dgpu images are used as the base for creating custom Triton server containers as described in Customize Triton Container.
The 23.08.xx-pyt-python-py3-dgpu image contains the Triton Inference Server with support for PyTorch and Python backends only.
The 23.08.xx-tf2-python-py3-dgpu image contains the Triton Inference Server with support for TensorFlow 2.x and Python backends only.
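As a sketch of how one of these images might be started, assuming a placeholder registry path, tag, GPU flag, and model-repository path (substitute the image that matches your entitlement and platform):

    # Image path, tag, and GPU flag are assumptions; on iGPU platforms the
    # NVIDIA container runtime may be selected differently.
    docker run --rm --gpus=all \
        -p 8000:8000 -p 8001:8001 -p 8002:8002 \
        -v /path/to/model_repository:/models \
        nvcr.io/nvidia/tritonserver:23.08-py3-igpu \
        tritonserver --model-repository=/models

Ports 8000, 8001, and 8002 expose the HTTP, gRPC, and metrics endpoints, respectively.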
The IGX Triton Inference Server Production Branch, part of NVIDIA AI Enterprise - IGX and purpose-built for NVIDIA IGX Orin platforms, provides an API-stable branch that includes monthly fixes for high and critical software vulnerabilities. This branch provides a stable and secure environment for building your mission-critical AI applications running at the edge. The Triton Inference Server production branch releases every six months, with a three-month overlap between consecutive releases.
Getting started with IGX Triton Inference Server Production Branch
Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA IGX Orin Documentation.
For an overview of the features included in the Triton Inference Server Production Branch as of October 2023, please refer to the Release Notes for Triton Inference Server 23.08.
For more information about the Triton Inference Server, see the Triton Inference Server documentation.
Additionally, if you're looking for information on Docker containers and guidance on running a container, review the Containers For Deep Learning Frameworks User Guide.
What APIs are breaking?
/v2/logging: the runtime log-settings endpoint, for both the HTTP and gRPC tritonserver frontends
/v2/trace/setting: the runtime trace-settings endpoint, for both the HTTP and gRPC tritonserver frontends (example requests are shown after this list)
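For reference, a hedged sketch of how these runtime settings were previously reached over HTTP, assuming the default port 8000; in this release such requests are rejected:

    # Formerly queried or updated the global log settings (now disabled):
    curl -s http://localhost:8000/v2/logging

    # Formerly queried or updated the global trace settings (now disabled):
    curl -s http://localhost:8000/v2/trace/setting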
Why are the APIs breaking?
Due to a security vulnerability reported in the logging and trace-settings runtime APIs, the tritonserver team has decided to deprecate these APIs for the time being. The team expects to re-enable them in a more secure form in a future release.
Impact and Change Guide
With the runtime logging and trace-settings endpoints disabled, users can no longer update the name and location of the server logs and trace logs after tritonserver has started. The logging and tracing features themselves remain available through the command-line options passed when starting tritonserver, or through the C API.
Use the --log-file option on the command line to specify the name and location of the server log you wish to write to; run tritonserver --help to read more about this option.
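For example, a minimal sketch assuming a placeholder model-repository and log path (confirm the available options with tritonserver --help):

    # Write server logs to a file chosen at startup (paths are placeholders).
    tritonserver --model-repository=/models \
        --log-file=/var/log/triton/server.log \
        --log-verbose=1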
Use the --trace-config option on the command line to specify the trace file and other trace settings at startup. See the Triton Inference Server trace documentation for more information, or run tritonserver --help to read more about this option.
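Similarly, a sketch of configuring tracing entirely at startup, assuming the "triton" trace mode and placeholder values (confirm the exact --trace-config keys for your release with tritonserver --help):

    # Configure tracing at startup instead of at runtime (values are placeholders).
    tritonserver --model-repository=/models \
        --trace-config mode=triton \
        --trace-config triton,file=/tmp/triton_trace.json \
        --trace-config rate=100 \
        --trace-config level=TIMESTAMPS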
Please review the Security Scanning tab to view the latest security scan results.
For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.
Get access to knowledge base articles and support cases by filing a support ticket.
Learn more about how to deploy NVIDIA AI Enterprise and access more technical information by visiting the documentation hub.
Access the NVIDIA Licensing Portal to manage your software licenses.