NGC | Catalog




DeepStream SDK delivers a complete streaming analytics toolkit for real-time, AI-based video and image understanding and multi-sensor processing. This container is for the NVIDIA Jetson platform.



Latest Tag: 6.2-base
Modified: March 1, 2023
Compressed Size: 2.44 GB
Architecture: Linux / arm64

Before You Start

DeepStream 6.2 brings new features, a new compute stack aligned with JetPack 5.1, and bug fixes. This release includes support for Ubuntu 20.04, GStreamer 1.16, CUDA 11.4, Triton 23.01, and TensorRT.

If you plan to bring models that were developed on pre-6.2 versions of DeepStream and the TAO Toolkit (formerly TLT), you need to re-calibrate the INT8 files so they are compatible with TensorRT before you can use them in DeepStream 6.2. Details can be found in the "Readme First" section of the SDK documentation.

What is DeepStream?

NVIDIA’s DeepStream SDK delivers a complete streaming analytics toolkit for AI-based multi-sensor processing for video, image, and audio understanding. DeepStream is an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixels and sensor data into actionable insights. DeepStream SDK features hardware-accelerated building blocks, called plugins, that bring deep neural networks and other complex processing tasks into a processing pipeline. The DeepStream SDK allows you to focus on building optimized Vision AI applications without having to design complete solutions from scratch.

The DeepStream SDK uses AI to perceive pixels and generate metadata while offering integration from the edge-to-the-cloud. The DeepStream SDK can be used to build applications across various use cases including retail analytics, patient monitoring in healthcare facilities, parking management, optical inspection, managing logistics and operations etc.

DeepStream 6.2 Features

  • Improved Graph Composer development environment. Graph Composer is now available for Windows 10 or Ubuntu 20.04 on x86 platforms. Graphs developed with Graph Composer can be deployed to x86 and Jetson devices.

  • New NvDeepSORT and NvSORT trackers

  • Automatic Speech Recognition (ASR), Text-to-Speech (TTS)

  • LIDAR support (Alpha)

  • Dewarper enhancements to support 15 new projections

  • Enable Preprocessing plugin with SGIE

  • GPU accelerated drawing for text, line, circles, and arrows using OSD plugin (alpha)

  • NVIDIA Rivermax integration: nvdsudpsink plugin optimizations to support Mellanox NICs for transmission and SMPTE compliance

  • Support for Google protobuf encoding and decoding of messages sent to message brokers (Kafka and Redis)

  • Performance optimizations

  • Turnkey integration with the latest TAO Toolkit AI models. Check the DeepStream documentation for a complete list of supported models

  • Develop in Python using DeepStream Python bindings: Bindings are now available in source-code. Download them from GitHub

  • Updated versions of NVIDIA Compute SDKs: Triton 23.01, TensorRT™ and CUDA® 11.4

  • More than 35 reference applications in Graph Composer, C/C++, and Python to get you started. Build applications that support action recognition, pose estimation, Automatic Speech Recognition (ASR), Text-to-Speech (TTS), and many more. We also include a complete reference app (deepstream-app) that can be set up with intuitive configuration files.

For a full list of new features and changes, please refer to the Release Notes document available here.

DeepStream container for Jetson

Container support is available for all Jetson platforms, including Jetson Xavier NX, AGX Xavier, AGX Orin, and Orin NX. The deepstream-l4t:6.2 family of containers is GPU accelerated and based on NVIDIA Jetson products running on the ARM64 architecture. For additional information, refer to the "Usage of heavy TRT base dockers since DeepStream 6.1" section in the NVIDIA DeepStream SDK Developer Guide.

DeepStream offers different container variants for Jetson (ARM64) platforms to cater to different user needs. Containers are differentiated based on image tags as described below:

  • Base: The DeepStream base container contains the plugins and libraries that are part of the DeepStream SDK along with dependencies such as CUDA, TensorRT, GStreamer, etc. This image is the recommended one for users that want to create docker images for their own DeepStream based applications. Please note that the base images do not contain sample apps. (deepstream-l4t:6.2-base)
  • Samples: The DeepStream samples container extends the base container to also include sample applications that are included in the DeepStream SDK along with associated config files, models, and streams. This container is ideal to understand and explore the DeepStream SDK using the provided samples. (deepstream-l4t:6.2-samples)
  • IoT: The DeepStream IoT container extends the base container to include the DeepStream test5 application along with associated configs and models. This container can be used to enable multi-stream DeepStream applications that can be integrated with the various messaging backends including Kafka, Azure IoT, REDIS, and MQTT thereby enabling IoT use cases. (deepstream-l4t:6.2-iot)
  • Deployment with Triton: The DeepStream Triton container enables running inference using Triton Inference server. With this, developers can run inference natively using TensorFlow, TensorFlow-TensorRT and ONNX-RT. Inference with Triton is supported in the reference application (deepstream-app). Note: Applications that may depend on opencv4 will not compile as opencv4 is not packaged within this container. To learn more about how to use Triton with DeepStream, refer to the Plugin guide in DeepStream 6.2 documentation (Gst-nvinferserver). (deepstream-l4t:6.2-triton)

These containers leverage the NVIDIA Container Runtime on Jetson, which is installed as part of NVIDIA JetPack 5.1. The NVIDIA Container Runtime mounts the platform-specific libraries and select device nodes for a particular device into the DeepStream container from the underlying host, thereby providing the necessary dependencies (BSP libraries) for DeepStream applications to execute within the container.

Since JetPack 5.1, the NVIDIA Container Runtime no longer mounts user-level libraries like CUDA, cuDNN, and TensorRT from the host. These are instead installed inside the containers.

Running DeepStream


Ensure these prerequisites are available on your system:

  1. Jetson device running L4T BSP r35.2.1

  2. JetPack 5.1
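If you are unsure which BSP the device is running, the L4T release file that ships on Jetson systems can be inspected (assumes a standard L4T installation; this file does not exist on non-Jetson machines):

```shell
# Print the L4T BSP release line on the Jetson device.
# For L4T r35.2.1 it reports "R35 (release), REVISION: 2.1".
head -n 1 /etc/nv_tegra_release
```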

Pull the container

Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.


  1. In the Pull column, click the icon to copy the docker pull command for the deepstream container.

  2. Open a command prompt and paste the pull command. The pulling of the container image begins. Ensure the pull completes successfully before proceeding to the next step.
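For reference, the pull command takes the following form. The `nvcr.io/nvidia/deepstream-l4t` repository path is assumed from standard NGC naming; substitute the tag you need:

```shell
# Pull the DeepStream base image for Jetson from NGC (tag shown is an example)
docker pull nvcr.io/nvidia/deepstream-l4t:6.2-base
```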

Run the container

To run the container:

  1. Allow external applications to connect to the host's X display:
xhost +
  2. Run the docker container using the NVIDIA Container Runtime (substitute the desired container tag in the command line below):
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.2 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:<tag>
  3. Perform the additional installations needed to use all DeepStream SDK features within the docker container.

With DS 6.2, DeepStream docker containers do not package libraries necessary for certain multimedia operations like audio data parsing, CPU decode, and CPU encode. This change could affect processing certain video streams/files like mp4 that include audio tracks.

Please run the script below inside the container to install additional packages that might be necessary to use all of the DeepStream SDK features:
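The script itself is not reproduced on this page. In DeepStream 6.2 containers, a helper script shipped with the SDK performs these installations; the name and path below are assumed from the DeepStream 6.2 release layout, so verify them in your image:

```shell
# Inside the running container (script name/path assumed; check your image)
cd /opt/nvidia/deepstream/deepstream-6.2
./user_additional_install.sh
```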


Command line options explained:

  • -it runs the container in interactive mode

  • --rm deletes the container when it exits

  • -v mounts a host directory into the container; here it is used to mount the host's X11 socket into the container filesystem

  • Users can mount additional directories (using the -v option) as required to easily access configuration files, models, and other resources (e.g., use -v /home:/home to mount the home directory into the container filesystem).

  • Additionally, the --cap-add SYSLOG option needs to be included to enable use of the nvds_logger functionality inside the container.

See /opt/nvidia/deepstream/deepstream-6.2/README inside the container for deepstream-app usage information. To access a CSI camera from Docker, add -v /tmp/argus_socket:/tmp/argus_socket to the docker command above. For a USB camera, add --device /dev/video<N> (e.g., --device /dev/video0).
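Putting the options together, a fuller invocation with the optional camera, logging, and home-directory mounts might look like the sketch below. The image tag and the USB device number are assumptions; adjust them to your setup:

```shell
# X11 mount: GUI output; argus_socket mount: CSI camera; --device: USB camera;
# --cap-add SYSLOG: nvds_logger; /home mount: easy access to host files.
sudo docker run -it --rm --net=host --runtime nvidia \
    -e DISPLAY=$DISPLAY \
    -w /opt/nvidia/deepstream/deepstream-6.2 \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    -v /tmp/argus_socket:/tmp/argus_socket \
    --device /dev/video0 \
    --cap-add SYSLOG \
    -v /home:/home \
    nvcr.io/nvidia/deepstream-l4t:6.2-samples
```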


DeepStream containers for Jetson rely on the NVIDIA Container Runtime available in JetPack 5.1. Please see the list below for limitations in the current enablement of DeepStream for Jetson containers.

Supports deployment only: The DeepStream container for Jetson is intended to be a deployment container and is not set up for building sources. Please refer to the "Docker Containers" section within the DeepStream 6.2 Plugin Guide for instructions on how to build custom DeepStream-based containers from either a Jetson device or your workstation.

AMQP support is not included inside the container. Please refer to the "AMQP Protocol Adapter" section within the DeepStream 6.2 Plugin Guide for instructions on how to install the necessary dependencies to enable AMQP, if required.

There are known bugs and limitations in the SDK. To learn more about these, refer to the release notes.


All Jetson containers are released under the NVIDIA License Agreement.

A copy of the license can also be found inside each container at /opt/nvidia/deepstream/deepstream-6.2/LicenseAgreement.pdf. By pulling and using the DeepStream SDK (deepstream) container from NGC, you accept the terms and conditions of this license.

Please note that all container images come with the following packages installed: librdkafka, hiredis.

Technical blogs

Suggested Reading

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.