TensorRT LTSB2 IGX

Description
NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network and produces a highly optimized runtime engine that performs inference for that network.
Publisher: NVIDIA
Latest Tag: 23.08-lws2.1.0-dgpu
Modified: April 9, 2025
Compressed Size: 6.2 GB
Multinode Support: No
Multi-Arch Support: No
23.08-lws2.1.0-dgpu (Latest) Security Scan Results

Linux / arm64

What Is TensorRT?

The core of NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network, which consists of a network definition and a set of trained parameters, and produces a highly optimized runtime engine that performs inference for that network.
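
For example, the trtexec command-line tool shipped in TensorRT containers can build and benchmark an engine directly from a trained model. The sketch below assumes trtexec is on the container's PATH and uses a placeholder ONNX model name:

    # Build a serialized TensorRT engine from an ONNX model and time inference
    # (model.onnx is a placeholder for your own trained network)
    trtexec --onnx=model.onnx --saveEngine=model.plan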

Running TensorRT

Before you can run an NGC deep learning framework container, your Docker environment must support NVIDIA GPUs. To run a container, issue the appropriate command as explained in the Running A Container chapter in the NVIDIA Containers And Frameworks User Guide and specify the registry, repository, and tags. For more information about using NGC, refer to the NGC Container User Guide.
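
As a quick sanity check that your Docker environment can access NVIDIA GPUs, you can run nvidia-smi inside a container; the image reference below reuses this page's placeholder tag:

    # Confirm GPU access from Docker; this should print the GPU table
    # if the NVIDIA container runtime is set up correctly
    docker run --rm --gpus all nvcr.io/nvaie/tensorrt:xx.xx-py3 nvidia-smi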

Procedure

  1. Select the Tags tab and locate the container image release that you want to run.

  2. In the Pull Tag column, click the icon to copy the docker pull command.

  3. Open a command prompt and paste the pull command. The pulling of the container image begins. Ensure the pull completes successfully before proceeding to the next step.

  4. Run the container image.

docker run --gpus all -it --rm -v local_dir:container_dir nvcr.io/nvaie/tensorrt:xx.xx-py3

Where:

  • -it runs the container in interactive mode with a terminal attached

  • --rm deletes the container when it exits

  • -v local_dir:container_dir mounts a directory from the host (local_dir) at the given path inside the container (container_dir)

  • xx.xx is the container version. For example, 21.07.
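
For example, a filled-in invocation using this page's latest tag might look like the following (the host directory is illustrative; substitute any local path you want visible inside the container):

    # Start an interactive TensorRT container with a host directory mounted at /models
    docker run --gpus all -it --rm -v $HOME/models:/models nvcr.io/nvaie/tensorrt:23.08-lws2.1.0-dgpu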

  5. You can build and run the TensorRT C++ samples from within the image. For details on how to run each sample, see the TensorRT Developer Guide.

    cd /workspace/tensorrt/samples
    make -j4
    cd /workspace/tensorrt/bin
    ./sample_mnist

  6. You can also execute the TensorRT Python samples.

    cd /workspace/tensorrt/samples/python/introductory_parser_samples
    python caffe_resnet50.py -d /workspace/tensorrt/python/data

  7. See /workspace/README.md inside the container for information on customizing your image.

Python Dependencies

To save space, some dependencies of the Python samples are not pre-installed in the container. To install them, run the following command before running the samples:

    /opt/tensorrt/python/python_setup.sh
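
For instance, the end-to-end order of operations for the parser sample shown earlier would be:

    # Install the Python sample dependencies first, then run the sample
    /opt/tensorrt/python/python_setup.sh
    cd /workspace/tensorrt/samples/python/introductory_parser_samples
    python caffe_resnet50.py -d /workspace/tensorrt/python/data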

Suggested Reading

For the latest TensorRT container Release Notes, see the TensorRT Container Release Notes website.

For the latest TensorRT product Release Notes, Developer and Installation Guides, see the TensorRT Product Documentation website.

Security Vulnerabilities in Open Source Packages

Please review the Security Scanning tab to view the latest security scan results. For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.

Known Issues:

  • Collection of ftrace events may not work correctly. A newer version of Nsight Systems, such as Nsight Systems 2024.5.4 from JetPack 6.1 or JetPack 5.1, can be used instead to collect ftrace events.

  • Profiling from the Nsight Systems GUI on IGX with a discrete GPU might not work, and connecting to such a devkit from an Ubuntu x86_64 host over SSH may also fail. In these cases, use the Nsight Systems command line (nsys) directly on the target.
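
As an illustration, profiling one of the C++ samples built earlier with the nsys CLI on the target might look like the following (the report name is arbitrary):

    # Profile a TensorRT sample directly on the IGX target with the Nsight Systems CLI
    cd /workspace/tensorrt/bin
    nsys profile -o trt_sample_report ./sample_mnist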

License

By pulling and using the container, you accept the terms and conditions of this End User License Agreement.