The core of NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network, which consists of a network definition and a set of trained parameters, and produces a highly optimized runtime engine that performs inference for that network.
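To make this build workflow concrete, here is a minimal C++ sketch, based on the TensorRT 8.x API shipped in this branch, that parses a trained network exported to ONNX and serializes an optimized engine. The file names `model.onnx` and `model.engine`, the logger, and the 1 GiB workspace limit are illustrative placeholders; refer to the TensorRT Developer Guide for authoritative API usage.

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

using namespace nvinfer1;

// Minimal logger required by the TensorRT builder API.
class Logger : public ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Create the builder and an explicit-batch network definition.
    auto builder = std::unique_ptr<IBuilder>(createInferBuilder(gLogger));
    auto network = std::unique_ptr<INetworkDefinition>(builder->createNetworkV2(
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));

    // Parse a trained network exported to ONNX ("model.onnx" is a placeholder).
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, gLogger));
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }

    // Configure and build the serialized, optimized engine.
    auto config = std::unique_ptr<IBuilderConfig>(builder->createBuilderConfig());
    config->setMemoryPoolLimit(MemoryPoolType::kWORKSPACE, 1U << 30); // 1 GiB (placeholder)
    auto serialized = std::unique_ptr<IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));

    // Persist the engine so the runtime can load it without rebuilding.
    std::ofstream out("model.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return 0;
}
```

At deployment time, the serialized engine can be deserialized with `IRuntime::deserializeCudaEngine` and executed through an `IExecutionContext`, so the expensive optimization step runs once rather than at every application start.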
The IGX TensorRT Production Branch, part of NVIDIA AI Enterprise - IGX and purpose-built for NVIDIA IGX Orin platforms, is an API-stable branch that receives monthly fixes for high- and critical-severity software vulnerabilities. It provides a stable and secure environment for building mission-critical AI applications that run at the edge. A new production branch releases every six months, with a three-month overlap between consecutive releases.
Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA IGX Orin Documentation.
For an overview of the features included in the TensorRT Production Branch as of October 2023, please refer to the Release Notes for TensorRT 23.08.
For TensorRT Developer and Installation Guides, see the TensorRT Product Documentation website.
Additionally, if you're looking for information on Docker containers and guidance on running a container, review the Containers For Deep Learning Frameworks User Guide.
See the Security Scanning tab for the latest security scan results. For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document, which can also be reviewed and downloaded from the Security Scanning tab.
To access knowledge base articles and support cases, file a ticket.
Learn more about how to deploy NVIDIA AI Enterprise and access more technical information by visiting the documentation hub.
Access the NVIDIA Licensing Portal to manage your software licenses.