Linux / arm64
DeepStream 6.2 brings new features, a new compute stack that aligns with JetPack 5.1, and bug fixes. This release includes support for Ubuntu 20.04, GStreamer 1.16, CUDA 11.4, Triton 23.01, and TensorRT 8.5.2.2.
If you plan to bring models that were developed on pre-6.2 versions of DeepStream and TAO Toolkit (formerly TLT), you need to re-calibrate the INT8 files so they are compatible with TensorRT 8.5.2.2 before you can use them in DeepStream 6.2. Details can be found in the Readme First section of the SDK documentation.
NVIDIA’s DeepStream SDK delivers a complete streaming analytics toolkit for AI-based multi-sensor processing for video, image, and audio understanding. DeepStream is an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixels and sensor data into actionable insights. DeepStream SDK features hardware-accelerated building blocks, called plugins, that bring deep neural networks and other complex processing tasks into a processing pipeline. The DeepStream SDK allows you to focus on building optimized Vision AI applications without having to design complete solutions from scratch.
The DeepStream SDK uses AI to perceive pixels and generate metadata while offering integration from the edge to the cloud. It can be used to build applications across various use cases, including retail analytics, patient monitoring in healthcare facilities, parking management, optical inspection, and logistics and operations management.
DeepStream 6.2 Features
Improved Graph Composer development environment. Graph Composer is now available for Windows 10 or Ubuntu 20.04 on x86 platforms. Graphs developed with Graph Composer can be deployed to x86 and Jetson devices.
New NvDeepSORT and NvSORT trackers
Automatic Speech Recognition (ASR), Text-to-Speech (TTS)
LIDAR support (Alpha)
Dewarper enhancements to support 15 new projections
Enable Preprocessing plugin with SGIE
GPU accelerated drawing for text, line, circles, and arrows using OSD plugin (alpha)
NVIDIA Rivermax integration: nvdsudpsink plugin optimizations to support Mellanox NICs for transmission and SMPTE compliance
Support for Google protobuf encoding and decoding of messages to message brokers (Kafka and Redis)
Performance optimizations
Turnkey integration with the latest TAO Toolkit AI models. Check the DeepStream documentation for a complete list of supported models
Develop in Python using DeepStream Python bindings: Bindings are now available in source-code. Download them from GitHub
Updated versions of NVIDIA Compute SDKs: Triton 23.01, TensorRT™ 8.5.2.2 and CUDA® 11.4
Over 35 reference applications in Graph Composer, C/C++, and Python to get you started. Build applications that support Action Recognition, Pose Estimation, Automatic Speech Recognition (ASR), Text-to-Speech (TTS), and many more. We also include a complete reference app (deepstream-app) that can be set up with intuitive configuration files; a minimal invocation is sketched after this list.
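As a minimal sketch, assuming the SDK samples are installed under the default path inside the container, deepstream-app can be launched with one of the shipped configuration files (the config file name here is illustrative; substitute any config present in your install):
# hypothetical sample config path; adjust to a config file present in your install
deepstream-app -c /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt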
For a full list of new features and changes, please refer to the Release Notes document available here.
Container support is now available for all Jetson platforms, including Jetson Xavier NX, AGX Xavier, AGX Orin, and Orin NX. The deepstream-l4t:6.2 family of containers is GPU accelerated and based on the NVIDIA Jetson products running on the ARM64 architecture. For additional information, refer to the “Usage of heavy TRT base dockers since DeepStream 6.1” section in the NVIDIA DeepStream SDK Developer Guide.
DeepStream offers different container variants for Jetson (ARM64) platforms to cater to different user needs. Containers are differentiated by image tag (for example, the 6.2-base tag used in the run command below).
These containers leverage the NVIDIA Container Runtime on Jetson, which is installed as part of NVIDIA JetPack version 5.1. The platform-specific libraries and select device nodes for a particular device are mounted by the NVIDIA Container Runtime into the DeepStream container from the underlying host, thereby providing the necessary dependencies (BSP libraries) for DeepStream applications to execute within the container.
Since JetPack 5.1, the NVIDIA Container Runtime no longer mounts user-level libraries such as CUDA, cuDNN, and TensorRT from the host; these are instead installed inside the containers.
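As a quick check, assuming the container's user-level stack is installed via Debian packages (as in the official images), you can list the relevant packages from a shell inside the container:
# list CUDA and TensorRT packages installed inside the container
dpkg -l | grep -E -i "cuda|tensorrt"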
Ensure these prerequisites are available on your system:
Jetson device running L4T BSP r35.2.1
JetPack 5.1
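To confirm the BSP version on the device, you can read the L4T release string (this file is present on standard JetPack/L4T installs; it should report an R35 revision 2.1 BSP for JetPack 5.1):
cat /etc/nv_tegra_release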
Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
Procedure
In the Pull column, click the icon to copy the docker pull command for the deepstream container.
Open a command prompt and paste the pull command. The pulling of the container image begins. Ensure the pull completes successfully before proceeding to the next step.
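For example, to pull the base variant used in the run command below (substitute the tag for the variant you need):
sudo docker pull nvcr.io/nvidia/deepstream-l4t:6.2-base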
To run the container:
xhost +
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.2 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:6.2-base
With DS 6.2, DeepStream docker containers do not package the libraries necessary for certain multimedia operations such as audio data parsing, CPU decode, and CPU encode. This change could affect the processing of certain video streams/files, such as MP4 files that include audio tracks.
Please run the following script inside the docker image to install the additional packages that might be necessary to use all of the DeepStream SDK features:
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
Command line options explained:
-it means run in interactive mode
--rm will delete the container when finished
-v mounts a host directory into the container; here it is used to mount the host's X11 display socket into the container filesystem
Users can mount additional directories (using the -v option) as required to easily access configuration files, models, and other resources (e.g., use -v /home:/home to mount the home directory into the container filesystem); a combined example is sketched after this list.
Additionally, --cap-add SYSLOG option needs to be included to enable usage of the nvds_logger functionality inside the container.
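For illustration, the options above can be combined into a single command; this sketch mounts the host home directory and adds --cap-add SYSLOG for nvds_logger (the mounted path is an example only):
# example only: mounts /home and enables syslog capability for nvds_logger
sudo docker run -it --rm --net=host --runtime nvidia --cap-add SYSLOG -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.2 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /home:/home nvcr.io/nvidia/deepstream-l4t:6.2-base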
See /opt/nvidia/deepstream/deepstream-6.2/README inside the container for deepstream-app usage information. To access a CSI camera from Docker, add the following argument to the docker command above: -v /tmp/argus_socket:/tmp/argus_socket. For a USB camera, add the additional argument --device /dev/video.
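As a sketch, assuming the USB camera enumerates as /dev/video0 on the host (check with ls /dev/video*), a run command exposing both a CSI and a USB camera would look like:
# /dev/video0 is an assumed device node; substitute your camera's node
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.2 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/argus_socket:/tmp/argus_socket --device /dev/video0 nvcr.io/nvidia/deepstream-l4t:6.2-base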
The DeepStream containers for Jetson use the NVIDIA Container Runtime available in JetPack 5.1. Please see the list below for limitations in the current enablement of DeepStream for Jetson containers.
Supports deployment only: The DeepStream container for Jetson is intended to be a deployment container and is not set up for building sources. Please refer to the “Docker Containers” section within the DeepStream 6.2 Plugin Guide for instructions on how to build custom containers based on DeepStream, from either a Jetson device or your workstation.
AMQP support is not included inside the container. Please refer to the “AMQP Protocol Adapter” section within the DeepStream 6.2 Plugin Guide for instructions on how to install the necessary dependencies for enabling AMQP, if required.
There are known bugs and limitations in the SDK. To learn more about them, refer to the release notes.
All Jetson containers are released under the NVIDIA License Agreement. A copy of the license can also be found within a specific container at the following location: /opt/nvidia/deepstream/deepstream-6.2/LicenseAgreement.pdf. By pulling and using the DeepStream SDK (deepstream) container from NGC, you accept the terms and conditions of this license.
Please note that all container images come with the following packages installed: librdkafka, hiredis.
DeepStream documentation, containing the development guide, getting started guide, plug-ins manual, API reference manual, migration guide, technical FAQ, and release notes, can be found on the Getting Started with DeepStream page.
If you have any questions or feedback, please refer to the discussions on DeepStream Forums.
The DeepStream SDK is also available as a Debian package (.deb) or tar file (.tbz2) on the NVIDIA Developer Zone.
For more information, including blogs and webinars, see the DeepStream SDK website.
Download TAO Toolkit from NGC
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.