Linux / amd64
DeepStream 6.2 brings new features, a new compute stack, and bug fixes. This release includes support for Ubuntu 20.04, GStreamer 1.16, CUDA 11.8, Triton 22.09 and TensorRT 8.5.2.2. If you plan to bring models that were developed on pre-6.1.1 versions of DeepStream and TAO Toolkit (formerly TLT), you need to re-calibrate the INT8 files so they are compatible with TensorRT 8.5.2.2 before you can use them in DeepStream 6.2. Details can be found in the Readme First section of the SDK documentation.
NVIDIA’s DeepStream SDK delivers a complete streaming analytics toolkit for AI-based multi-sensor processing for video, image, and audio understanding. DeepStream is an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixels and sensor data into actionable insights. DeepStream SDK features hardware-accelerated building blocks, called plugins, that bring deep neural networks and other complex processing tasks into a processing pipeline. The DeepStream SDK allows you to focus on building optimized Vision AI applications without having to design complete solutions from scratch.
The DeepStream SDK uses AI to perceive pixels and generate metadata while offering integration from the edge to the cloud. The DeepStream SDK can be used to build applications across various use cases, including retail analytics, patient monitoring in healthcare facilities, parking management, optical inspection, and managing logistics and operations.
DeepStream 6.2 Features
NVIDIA AI Enterprise 3.0 support
Support for all NVIDIA Ampere and Hopper GPUs
New NvDeepSORT and NvSORT trackers
REST API support to control DeepStream pipeline on-the-fly (Alpha)
LIDAR support (Alpha)
Dewarper enhancements to support 15 new projections
New Gst-nvdsxfer plugin transfers data over NVLink across multiple GPUs within a single process for disaggregated pipelines
Preprocessing plugin can now be enabled with SGIE (secondary inference)
GPU-accelerated drawing of text, lines, circles, and arrows using the OSD plugin (Alpha)
NVIDIA Rivermax integration: nvdsudpsink plugin optimizations supporting Mellanox NICs for transmission and SMPTE compliance
Support for Google protobuf encoding and decoding of messages to message brokers (Kafka and Redis)
Performance optimizations
Turnkey integration with the latest TAO Toolkit AI models. Check the DeepStream documentation for a complete list of supported models.
Develop in Python using DeepStream Python bindings: bindings are now available as source code. Download them from GitHub (an example clone command is shown after this list).
New Python reference app that shows how to use the demuxer to output multiple video streams
Improved Graph Composer development environment. Develop DeepStream applications in an intuitive drag-and-drop user interface. (Please note that Graph Composer is only pre-installed on the deepstream:6.2-devel container. More details below.)
Updated versions of NVIDIA Compute SDKs: Triton 22.09, TensorRT™ 8.5.2.2 and CUDA® 11.8
More than 35 reference applications in Graph Composer, C/C++, and Python to get you started. Build applications that support: Action Recognition, Pose Estimation, Automatic Speech Recognition (ASR), Text-to-Speech (TTS), and many more. We also include a complete reference app (deepstream-app) that can be set up with intuitive configuration files.
For a full list of new features and changes, please refer to the Release Notes document available here.
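As a hedged example of obtaining the Python bindings source mentioned above, the commands below assume the bindings are published in the deepstream_python_apps repository on GitHub; verify the repository name on the DeepStream GitHub page before using it:
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
cd deepstream_python_apps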
Please refer to the section below, which describes the different container options offered for NVIDIA Data Center GPUs running on x86 platforms.
DeepStream offers different container variants for NVIDIA Data Center GPUs on x86 platforms to cater to different user needs. Containers are differentiated by image tags, as described below:
DeepStream containers, or containers derived from previous releases (before DS 6.1), will need an updated CUDA GPG key to perform software updates. Please see this link for details.
The DALI CVEs can be eliminated if the end user deletes the entire DALI backend directory (/opt/tritonserver/backends/dali/), for example as shown below.
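A minimal sketch of removing the DALI backend inside the Triton container (the path is taken from the note above; only do this if your pipelines do not use the DALI backend):
rm -rf /opt/tritonserver/backends/dali/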
The DS Container (x86: triton) includes DALI with known vulnerabilities. These are inherited from the x86 Triton base container nvcr.io/nvidia/tritonserver:22.09-py3. See CVE-2022-37454, CVE-2018-25032, CVE-2022-45061, CVE-2020-10735, CVE-2022-40897, and CVE-2022-40898 for details.
The DS Container (x86: triton) includes a mailcap module with a known vulnerability that is not used by DeepStream. This module is present in the Conda env used by DALI included within the Triton docker. See CVE-2015-20107 for details. This will be fixed in the next release. Users may remove this inside their docker images with the command: rm /usr/lib/python3.8/mailcap.py
The DS Container (x86: triton) includes librabbitmq 0.8.0 with a known vulnerability that currently has no official patch for Ubuntu 20.04. See CVE-2019-18609 for details. This will be addressed in the next release. To avoid this completely, users may use one of the other IoT protocols supported by DeepStream: Redis, Kafka, or Azure.
Ensure these prerequisites are available on your system:
nvidia-docker: We recommend using Docker 20.10.13 along with the latest nvidia-container-toolkit as described in the installation steps. Usage of nvidia-docker2 packages in conjunction with prior Docker versions is now deprecated. (A quick GPU-access check is shown after these prerequisites.)
NVIDIA display driver version 525.85.12.
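As an optional sanity check that the NVIDIA Container Toolkit and driver are set up correctly, you can run nvidia-smi inside a CUDA base container. The image tag below is an assumption; any recent CUDA base image will do:
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu20.04 nvidia-smi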
Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
Procedure:
In the Pull column, click the icon to copy the docker pull command for the deepstream container of your choice.
Open a command prompt and paste the pull command. The container image pull begins. Ensure the pull completes successfully before proceeding to the next step.
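For example, to pull the development container used in the run commands below:
docker pull nvcr.io/nvidia/deepstream:6.2-devel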
To run the container:
xhost +
docker run --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.2 nvcr.io/nvidia/deepstream:6.2-devel
If using nvidia-docker (deprecated) based on a version of Docker prior to 19.03:
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.2 nvcr.io/nvidia/deepstream:6.2-devel
Note that the command mounts the host's X11 display in the guest filesystem to render output videos.
With DS 6.2, DeepStream docker containers do not package libraries necessary for certain multimedia operations like audio data parsing, CPU decode, and CPU encode. This change could affect processing certain video streams/files like mp4 that include audio tracks.
Please run the script below inside the docker images to install additional packages that might be necessary to use all of the DeepStream SDK features:
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
Command line options explained:
-it means run in interactive mode
--gpus makes GPUs accessible inside the container. As an alternative to "all", it is possible to specify a particular device (e.g. --gpus '"device=0"')
--rm will delete the container when finished
--privileged grants the container access to host resources. This flag is needed to run Graph Composer from the -devel container
-v mounts a host directory into the container; here it is used to mount the host's X11 display socket in the container filesystem to render output videos
Users can mount additional directories (using the -v option) as required to easily access configuration files, models, and other resources (e.g., use -v /home:/home to mount the home directory into the container filesystem)
Additionally, the --cap-add SYSLOG option needs to be included to enable the nvds_logger functionality inside the container
To enable RTSP output, a network port needs to be mapped from the container to the host to allow incoming connections, using the -p option on the command line (e.g. -p 8554:8554). A combined example command is shown below
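The command below is an illustrative sketch combining the options above (additional mounts, syslog capability, and RTSP port mapping); adjust the mounted directories and ports to your setup, and note that -p is only needed when --net=host (used in the earlier command) is not passed:
docker run --gpus all -it --rm --privileged --cap-add SYSLOG -p 8554:8554 -v /tmp/.X11-unix:/tmp/.X11-unix -v /home:/home -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.2 nvcr.io/nvidia/deepstream:6.2-devel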
See /opt/nvidia/deepstream/deepstream-6.2/README inside the container for deepstream-app usage.
There are known bugs and limitations in the SDK. To learn more about those, refer to the release notes.
When creating a base image using the Triton (x86) docker, one approach is to use an entrypoint with a combined script so that end users can run a specific script for their application:
ENTRYPOINT ["/bin/sh", "-c", "/opt/nvidia/deepstream/deepstream-6.2/entrypoint.sh && <custom command>"]
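A minimal Dockerfile sketch built on this idea is shown below; the base image tag and the my_app.sh script are assumptions, so substitute the Triton image tag listed on this page and your own application script:
# Base image tag assumed; check the Tags tab for the exact Triton container tag
FROM nvcr.io/nvidia/deepstream:6.2-triton
# Copy a hypothetical application script into the container (assumed name)
COPY my_app.sh /opt/nvidia/deepstream/deepstream-6.2/my_app.sh
# Run the DeepStream entrypoint first, then the application script
ENTRYPOINT ["/bin/sh", "-c", "/opt/nvidia/deepstream/deepstream-6.2/entrypoint.sh && /opt/nvidia/deepstream/deepstream-6.2/my_app.sh"]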
For the DeepStream SDK containers, there are two different licenses that apply based on the container used:
A copy of the license can also be found within a specific container at /opt/nvidia/deepstream/deepstream-6.2/LicenseAgreement.pdf. By pulling and using the DeepStream SDK (deepstream) container from NGC, you accept the terms and conditions of this license.
Please note that all container images come with the following packages installed: librdkafka, hiredis, cmake, autoconf (license and license exception), libtool, libglvnd-dev, libgl1-mesa-dev, libegl1-mesa-dev, libgles2-mesa-dev.
In addition, the (deepstream:6.2-devel) container includes the Vulkan Validation Layers (v1.1.123) to support the NVIDIA Graph Composer.
The software listed below is provided under the terms of GPLv3.
To obtain source code for software provided under licenses that require redistribution of source code, including the GNU General Public License (GPL) and GNU Lesser General Public License (LGPL), contact oss-requests@nvidia.com. This offer is valid for a period of three (3) years from the date of the distribution of this product by NVIDIA CORPORATION.
Component | License |
---|---|
autoconf | GPL 3.0 |
libtool | GPL 3.0 |
libglvnd-dev | GPL 3.0 |
libgl1-mesa-dev | GPL 3.0 |
libegl1-mesa-dev | GPL 3.0 |
libgles2-mesa-dev | GPL 3.0 |
Read the technical tutorial on how the PeopleNet model can be trained with custom data using the TAO Toolkit (formerly NVIDIA Transfer Learning Toolkit).
Get started on how to configure and use [DeepStream trackers](https://youtu.be/4nV-GtqggEw).
DeepStream documentation, containing the development guide, getting started instructions, plug-ins manual, API reference manual, migration guide, technical FAQ, and release notes, can be found on the Getting Started with DeepStream page.
If you have any questions or feedback, please refer to the discussions on DeepStream Forums.
The DeepStream SDK is also available as a Debian package (.deb) or tar file (.tbz2) on the NVIDIA Developer Zone.
For more information, including blogs and webinars, see the DeepStream SDK website.
Download TAO Toolkit from NGC
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.