Linux / arm64
DeepStream 6.0 is a major release that brings substantial new features over the previous version. One significant change is support for TensorRT 8.0.1. If you plan to bring over models that were developed on previous versions of DeepStream and the TAO Toolkit (formerly TLT), you need to re-calibrate the INT8 calibration files so they are compatible with TensorRT 8.0.1 before you can use them in DeepStream 6.0. Details can be found in the Readme First section of the SDK Documentation.
NVIDIA’s DeepStream SDK delivers a complete streaming analytics toolkit for AI-based multi-sensor processing for video, image, and audio understanding. DeepStream is an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixels and sensor data into actionable insights. DeepStream SDK features hardware-accelerated building blocks, called plugins, that bring deep neural networks and other complex processing tasks into a processing pipeline. The DeepStream SDK allows you to focus on building optimized Vision AI applications without having to design complete solutions from scratch.
The DeepStream SDK uses AI to perceive pixels and generate metadata while offering integration from the edge to the cloud. The DeepStream SDK can be used to build applications across various use cases, including retail analytics, patient monitoring in healthcare facilities, parking management, optical inspection, and logistics and operations management.
DeepStream 6.0 Features
New Graph Composer development environment. Please note that the Graph Composer development environment works only on x86 systems, but the graphs can be deployed on Jetson as well.
Turnkey integration with the latest TAO Toolkit AI models. Our latest additions: action recognition, 2D body pose estimation, facial landmark estimation, emotion recognition, gaze estimation, heart rate estimation, and gesture recognition. Check the DeepStream documentation for a complete list of supported models.
New Preprocessor plugin: a plugin that performs preprocessing on predefined ROIs.
gRPC support in the nvinferserver plugin: run AI inference with TensorFlow, TensorFlow-TensorRT, and ONNX-RT on a standalone or remote Triton Inference Server.
Support for audio/video synchronization for broadcasting and web conferencing applications
NVIDIA Rivermax integration: Move data directly from Mellanox NIC to GPU memory. Optimize uncompressed video pipelines by reducing CPU workload and improving PCIe bandwidth.
Develop in Python using the DeepStream Python bindings: bindings are now available as source code. Download them from GitHub.
Edge-to-cloud integration using standard message brokers: DeepStream now supports Redis in addition to Kafka, MQTT, and Azure IoT.
Improved IoT and manageability features: bi-directional messaging between edge and cloud, over-the-air (OTA) model updates, smart recording, and TLS-based authentication for secure messaging.
Updated versions of NVIDIA Compute SDKs: Triton 21.08, TensorRT™ 8.0.1 and CUDA® 10.2
Hardware accelerated video encoding/decoding and image decoding
Over 30 reference applications in Graph Composer, C/C++, and Python to get you started. Build applications that support action recognition, pose estimation, and many more use cases. We also include a complete reference app (deepstream-app) that can be set up with intuitive configuration files.
Container support is now available for all Jetson platforms, including Jetson Nano, TX1, TX2 NX, Xavier NX, and AGX Xavier. The deepstream-l4t:6.0 family of containers is GPU accelerated and based on the NVIDIA Jetson products running on the ARM64 architecture.
Starting with the DeepStream 4.0.1 release, different container variants are being released for Jetson (ARM64) platforms to cater to different user needs. Containers are differentiated based on image tags as described below:
These containers leverage the NVIDIA Container Runtime on Jetson, which is available for install as part of NVIDIA JetPack version 4.6. The platform specific libraries and select device nodes for a particular device are mounted by the NVIDIA Container Runtime into the DeepStream container from the underlying host, thereby providing necessary dependencies for DeepStream applications to execute within the container.
Similarly, CUDA and TensorRT are ready to use within the DeepStream container as they are made available from the host by the NVIDIA Container Runtime.
Ensure the prerequisites described above (NVIDIA JetPack 4.6 with the NVIDIA Container Runtime) are available on your system.
Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
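For example, a pull for this container looks like the following; substitute the tag you intend to run for [CONTAINER-TAG]:
sudo docker pull nvcr.io/nvidia/deepstream-l4t:6.0-[CONTAINER-TAG]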
To run the container:
Allow external applications to connect to the host's X display:
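A common way to do this, assuming an X server is running on the host (note that this opens the display to all local clients):
xhost +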
Run the docker container with the NVIDIA runtime (use the desired container tag in the command line below):
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.0 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:6.0-[CONTAINER-TAG]
Command line options explained:
-it means run in interactive mode
--rm will delete the container when finished
-v mounts a host directory into the container; here it is used to mount the host's X11 display socket into the container filesystem
Users can mount additional directories (using the -v option) as required to easily access configuration files, models, and other resources (e.g., use -v /home:/home to mount the home directory into the container filesystem).
Additionally, the --cap-add SYSLOG option needs to be included to enable usage of the nvds_logger functionality inside the container, as shown in the example below.
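As an illustrative sketch, a run command that also mounts the home directory and enables nvds_logger simply adds -v /home:/home and --cap-add SYSLOG to the base command above:
sudo docker run -it --rm --net=host --runtime nvidia --cap-add SYSLOG -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.0 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /home:/home nvcr.io/nvidia/deepstream-l4t:6.0-[CONTAINER-TAG]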
See /opt/nvidia/deepstream/deepstream-6.0/README inside the container for deepstream-app usage information. To access a CSI camera from Docker, add the following argument to the docker command above: -v /tmp/argus_socket:/tmp/argus_socket. For a USB camera, add the argument --device /dev/video (substituting the appropriate video device node).
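For example, a run command that exposes both a CSI camera and a USB camera to the container could look like the following; /dev/video0 is only an assumed device node, so substitute the node of your camera:
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.0 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/argus_socket:/tmp/argus_socket --device /dev/video0 nvcr.io/nvidia/deepstream-l4t:6.0-[CONTAINER-TAG]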
These containers use the NVIDIA Container Runtime available in JetPack 4.6. Please see the list below for limitations in the current enablement of DeepStream for Jetson containers.
Supports deployment only: the DeepStream container for Jetson is intended to be a deployment container and is not set up for building sources, except for the Triton docker. Please refer to the “Docker Containers” section of the DeepStream 6.0 Plugin Guide for instructions on how to build custom containers based on DeepStream, either on a Jetson device or on your workstation.
AMQP support is not included inside the container. Please refer to the “AMQP Protocol Adapter” section of the DeepStream 6.0 Plugin Guide for instructions on installing the dependencies needed to enable AMQP, if required.
There are known bugs and limitations in the SDK. To learn more about those, refer to the release notes.
All Jetson containers are released under the NVIDIA License Agreement.
A copy of the license can also be found within a specific container at the following location:
/opt/nvidia/deepstream/deepstream-6.0/LicenseAgreement.pdf. By pulling and using the DeepStream SDK (deepstream) container from NGC, you accept the terms and conditions of this license.
DeepStream documentation, including the development guide, getting started, plug-ins manual, API reference manual, migration guide, technical FAQ, and release notes, can be found on the Getting Started with DeepStream page.
If you have any questions or feedback, please refer to the discussions on the DeepStream 6.0 Forums.
The DeepStream SDK is also available as a Debian package (.deb) or tar file (.tbz2) from the NVIDIA Developer Zone.
For more information, including blogs and webinars, see the DeepStream SDK website.
Download TAO Toolkit from NGC
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.