DeepStream 6.0 is a major release that brings substantial new features over the previous version. One significant change is support for TensorRT 8.0.1. If you plan to use models developed with previous versions of DeepStream and the TAO Toolkit (formerly TLT), you must re-calibrate their INT8 calibration files for compatibility with TensorRT 8.0.1 before you can use them in DeepStream 6.0. Details can be found in the Readme First section of the SDK documentation.
NVIDIA’s DeepStream SDK delivers a complete streaming analytics toolkit for AI-based multi-sensor processing for video, image, and audio understanding. DeepStream is an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixels and sensor data into actionable insights. DeepStream SDK features hardware-accelerated building blocks, called plugins, that bring deep neural networks and other complex processing tasks into a processing pipeline. The DeepStream SDK allows you to focus on building optimized Vision AI applications without having to design complete solutions from scratch.
The DeepStream SDK uses AI to perceive pixels and generate metadata while offering integration from the edge to the cloud. The DeepStream SDK can be used to build applications across various use cases, including retail analytics, patient monitoring in healthcare facilities, parking management, optical inspection, and logistics and operations management.
DeepStream 6.0 Features
Support for all NVIDIA Ampere GPUs
New Graph Composer development environment. Develop DeepStream applications in an intuitive drag-and-drop user interface. (Please note that Graph Composer is only pre-installed on the deepstream:6.0-devel container. More details below.)
Turnkey integration with the latest TAO Toolkit AI models. Our latest additions: action recognition, 2D body pose estimation, facial landmark estimation, emotion recognition, gaze, heart rate and gesture. Check the DeepStream documentation for a complete list of supported models.
New Preprocessor plugin: a plugin that performs pre-inference processing on predefined regions of interest (ROIs).
New Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) plugins: New plugins to build Conversational AI applications. ASR and TTS plugins communicate to Triton Inference Server via gRPC. The ASR and TTS Models are part of the NVIDIA Riva SDK.
gRPC support in the nvinferserver plugin: run AI inference with TensorFlow, TensorFlow-TensorRT, PyTorch, and ONNX-RT on a standalone or remote Triton Inference Server.
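To illustrate the gRPC mode, a minimal nvinferserver configuration might point the Triton backend at a server over gRPC. This is only a sketch: the model name and URL are placeholders, and other required groups (preprocess, postprocess, input control) are omitted for brevity.

```protobuf
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "my_model"    # placeholder: a model deployed on the Triton server
      version: -1               # -1 selects the latest available version
      grpc {
        url: "localhost:8001"   # Triton's default gRPC port; point at a remote host as needed
      }
    }
  }
}
```

When the grpc group is present, inference requests are sent to the named server instead of running Triton in-process.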
Support for audio/video synchronization for broadcasting and web conferencing applications
NVIDIA Rivermax integration: Move data directly from Mellanox NIC to GPU memory. Optimize uncompressed video pipelines by reducing CPU workload and improving PCIe bandwidth.
Develop in Python using DeepStream Python bindings: bindings are now available as source code; download them from GitHub.
Edge-to-cloud integration using standard message brokers: DeepStream now supports Redis in addition to Kafka, MQTT, and Azure IoT.
Improved IoT and manageability features: bi-directional messaging between edge and cloud, over-the-air (OTA) model updates, smart recording, and TLS-based authentication for secure messaging.
Updated versions of NVIDIA Compute SDKs: Triton 21.08, TensorRT™ 8.0.1 and CUDA® 11.4
Hardware accelerated video encoding/decoding and image decoding
Over 30 reference applications in Graph Composer, C/C++, and Python to get you started. Build applications that support: Action Recognition, Pose Estimation, Automatic Speech Recognition (ASR), Text-to-Speech (TTS) and many more. We also include a complete reference app (deepstream-app) that can be set up with intuitive configuration files.
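For reference, deepstream-app is driven by plain INI-style configuration files. A minimal sketch might look like the following; the group and key names follow the reference app's convention, while the media URI and inference config path are placeholders:

```ini
[application]
enable-perf-measurement=1

[source0]
enable=1
type=3                  # 3 = multi-URI source
uri=file:///path/to/sample.mp4
num-sources=1

[streammux]
batch-size=1
width=1920
height=1080

[sink0]
enable=1
type=2                  # 2 = on-screen (EGL) sink

[primary-gie]
enable=1
config-file=config_infer_primary.txt
```

Each [source], [sink], and [gie] group can be repeated with an incremented index to build multi-stream, multi-model pipelines.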
Please refer to the section below, which describes the different container options offered for NVIDIA Data Center GPUs running on x86 platforms.
Starting with the DeepStream 4.0.1 release, different container variants are released for x86 platforms with NVIDIA Data Center GPUs to cater to different user needs. Containers are differentiated by image tags, as described below:
The DS Triton (x86) container includes a version of openssl with a known vulnerability that was discovered late in our QA process. See CVE-2021-3711 for details. This will be fixed in the next release.
Ensure these prerequisites are available on your system:
Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
To run the container:
Allow external applications to connect to the host's X display:

xhost +
Run the docker container (use the desired container tag in the command line below):
If using docker (recommended):
docker run --gpus '"'device=0'"' -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.0 nvcr.io/nvidia/deepstream:6.0-[CONTAINER-TAG]
If using nvidia-docker (deprecated) based on a version of docker prior to 19.03:
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.0 nvcr.io/nvidia/deepstream:6.0-[CONTAINER-TAG]
Note that the command mounts the host's X11 display in the guest filesystem to render output videos.
Command line options explained:
-it means run in interactive mode
--gpus option makes GPUs accessible inside the container
--rm will delete the container when finished
-v mounts a host path into the container; here it is used to mount the host's X11 socket into the container filesystem so output videos can be rendered
Users can mount additional directories (using the -v option) as needed to easily access configuration files, models, and other resources (e.g., use -v /home:/home to mount the home directory into the container filesystem).
Additionally, the --cap-add SYSLOG option needs to be included to enable use of the nvds_logger functionality inside the container
To enable RTSP output, a network port needs to be mapped from the container to the host to allow incoming connections, using the -p option; e.g., -p 8554:8554
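Putting the options above together, a complete invocation might look like the following sketch. The container tag, GPU selection, and mounted paths are illustrative; substitute your own values.

```shell
# Assemble a docker run command combining the options described above.
# CONTAINER_TAG is a placeholder; use a real tag such as 6.0-devel or 6.0-triton.
CONTAINER_TAG=6.0-devel

DOCKER_CMD="docker run --gpus all -it --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=$DISPLAY \
  -v /home:/home \
  --cap-add SYSLOG \
  -p 8554:8554 \
  -w /opt/nvidia/deepstream/deepstream-6.0 \
  nvcr.io/nvidia/deepstream:${CONTAINER_TAG}"

# Print the assembled command so the flags can be reviewed before launching.
echo "$DOCKER_CMD"
```

Echoing the command first makes it easy to verify the flags (display mount, syslog capability, RTSP port) before actually starting the container.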
See /opt/nvidia/deepstream/deepstream-6.0/README inside the container for deepstream-app usage.
There are known bugs and limitations in the SDK. To learn more about those, refer to the release notes.
Please note that to use the DeepStream Python bindings in the Triton docker, Python 3.6 must be installed along with compatible versions of python3-gi, python3-dev, and python3-gst-1.0. We provide a script that helps with this installation. Simply run these commands once the container is up and running:
$ cd /opt/nvidia/deepstream/deepstream-6.0
$ ./docker_python_setup.sh
For the DeepStream SDK containers there are two different licenses that apply based on the container used:
A copy of the license can also be found within a specific container at the location:
/opt/nvidia/deepstream/deepstream-6.0/LicenseAgreement.pdf. By pulling and using the DeepStream SDK (deepstream) container from NGC, you accept the terms and conditions of this license.
In addition, the (deepstream:6.0-devel) container includes the Vulkan Validation Layers (v1.1.123) to support the NVIDIA Graph Composer.
The software listed below is provided under the terms of GPLv3.
To obtain source code for software provided under licenses that require redistribution of source code, including the GNU General Public License (GPL) and GNU Lesser General Public License (LGPL), contact email@example.com. This offer is valid for a period of three (3) years from the date of the distribution of this product by NVIDIA CORPORATION.
DeepStream documentation, including the development guide, getting started guide, plugins manual, API reference manual, migration guide, technical FAQ, and release notes, can be found on the Getting Started with DeepStream page.
If you have any questions or feedback, please refer to the discussions on DeepStream 6.0 Forums.
The DeepStream SDK is also available as a Debian package (.deb) or tar file (.tbz2) from the NVIDIA Developer Zone.
For more information, including blogs and webinars, see the DeepStream SDK website.
Download TAO Toolkit from NGC
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.