
DeepStream


Description

DeepStream SDK delivers a complete streaming analytics toolkit for AI-based video and image understanding and multi-sensor processing. This container is for data center GPUs, such as the NVIDIA T4, running on the x86 platform.

Publisher

NVIDIA

Latest Tag

6.2-triton

Modified

March 1, 2023

Compressed Size

12.12 GB

Multinode Support

No

Multi-Arch Support

No

Before You Start

DeepStream 6.2 brings new features, a new compute stack, and bug fixes. This release includes support for Ubuntu 20.04, GStreamer 1.16, CUDA 11.8, Triton 22.09, and TensorRT 8.5.2.2. If you plan to bring in models that were developed on pre-6.1.1 versions of DeepStream and TAO Toolkit (formerly TLT), you need to re-calibrate the INT8 files so they are compatible with TensorRT 8.5.2.2 before you can use them in DeepStream 6.2. Details can be found in the Readme First section of the SDK Documentation.

What is DeepStream?

NVIDIA’s DeepStream SDK delivers a complete streaming analytics toolkit for AI-based multi-sensor processing for video, image, and audio understanding. DeepStream is an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixels and sensor data into actionable insights. DeepStream SDK features hardware-accelerated building blocks, called plugins, that bring deep neural networks and other complex processing tasks into a processing pipeline. The DeepStream SDK allows you to focus on building optimized Vision AI applications without having to design complete solutions from scratch.

The DeepStream SDK uses AI to perceive pixels and generate metadata while offering integration from the edge to the cloud. The DeepStream SDK can be used to build applications across a variety of use cases, including retail analytics, patient monitoring in healthcare facilities, parking management, optical inspection, and logistics and operations management.

DeepStream 6.2 Features

  • NVIDIA AI Enterprise 3.0 support

  • Support for all NVIDIA Ampere and Hopper GPUs

  • New NvDeepSORT and NvSORT trackers

  • REST API support to control DeepStream pipeline on-the-fly (Alpha)

  • LIDAR support (Alpha)

  • Dewarper enhancements to support 15 new projections

  • New Gst-nvdsxfer plugin that transfers data over NVLink across multiple GPUs within a single process for disaggregated pipelines

  • Support for enabling the preprocessing plugin with SGIEs

  • GPU-accelerated drawing of text, lines, circles, and arrows using the OSD plugin (Alpha)

  • NVIDIA Rivermax integration: nvdsudpsink plugin optimizations to support Mellanox NICs for transmission and SMPTE compliance

  • Support for Google protobuf encoding and decoding of messages to message brokers (Kafka and REDIS)

  • Performance optimizations

  • Turnkey integration with the latest TAO Toolkit AI models. Check the DeepStream documentation for a complete list of supported models.

  • Develop in Python using DeepStream Python bindings: bindings are now available as source code; download them from GitHub

  • New Python reference app that shows how to use demux to multi-out video streams

  • Improved Graph Composer development environment. Develop DeepStream applications in an intuitive drag-and-drop user interface. (Please note that Graph Composer is only pre-installed on the deepstream:6.2-devel container. More details below.)

  • Updated versions of NVIDIA Compute SDKs: Triton 22.09, TensorRT™ 8.5.2.2 and CUDA® 11.8

  • Over 35 reference applications in Graph Composer, C/C++, and Python to get you started. Build applications that support Action Recognition, Pose Estimation, Automatic Speech Recognition (ASR), Text-to-Speech (TTS), and many more. We also include a complete reference app (deepstream-app) that can be set up with intuitive configuration files.

For a full list of new features and changes, please refer to the Release Notes document available here.

DeepStream containers for x86: T4, A100, A30, A10, A2, Hopper, and RTX GPUs

The section below describes the different container options offered for NVIDIA Data Center GPUs running on the x86 platform. DeepStream offers different container variants to cater to different user needs; containers are differentiated by image tag, as described below:

  • Development: This is the default tag for the container. The DeepStream development container is the recommended starting point, as it includes Graph Composer, the build toolchains, and the development libraries and packages necessary for building DeepStream reference applications from within the container. This container is slightly larger by virtue of including the build dependencies. (deepstream:6.2-devel)
  • Base: The DeepStream base container contains the plugins and libraries that are part of the DeepStream SDK, along with dependencies such as CUDA, TensorRT, GStreamer, etc. This is the recommended image for users who want to create docker images for their own DeepStream-based applications; a minimal Dockerfile sketch follows this list. Please note that the base image does not contain sample apps or Graph Composer. (deepstream:6.2-base)
  • Samples: The DeepStream samples container extends the base container to also include the sample applications that ship with the DeepStream SDK, along with associated config files, models, and streams. This container is ideal for understanding and exploring the DeepStream SDK using the provided samples. Please note that Graph Composer is not included in this container. (deepstream:6.2-samples)
  • IoT: The DeepStream IoT container extends the base container to include the DeepStream test5 application along with associated configs and models. This container can be used to build multi-stream DeepStream applications that integrate with various messaging backends, including Kafka, Azure IoT, REDIS, and MQTT, thereby enabling IoT use cases. Please note that Graph Composer is not included in this container. (deepstream:6.2-iot)
  • Deployment with Triton: The DeepStream Triton container enables running inference using Triton Inference Server. With this, developers can run inference natively using TensorFlow, TensorFlow-TensorRT, PyTorch, and ONNX-RT. Inference with Triton is supported in the reference application (deepstream-app). To learn more about how to use Triton with DeepStream, refer to the plugin guide (Gst-nvinferserver) in the DeepStream 6.2 documentation. This container is the biggest in size because it combines multiple containers. Please note that Graph Composer is not included in this container. (deepstream:6.2-triton)
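
As a minimal sketch of the Dockerfile approach for the base image, the example below packages a hypothetical application on top of deepstream:6.2-base; the application name and paths are placeholders, not part of the SDK:

FROM nvcr.io/nvidia/deepstream:6.2-base

# Hypothetical application binary and config files; replace with your own
COPY my_ds_app /opt/my_ds_app/
COPY configs /opt/my_ds_app/configs/

# Run the application by default when the container starts
WORKDIR /opt/my_ds_app
CMD ["./my_ds_app", "-c", "configs/app_config.txt"]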

CUDA GPG key update for dockers from previous releases

DeepStream dockers, or dockers derived from releases prior to DS 6.1, need to update their CUDA GPG key before they can perform software updates. Please see link for details.

Known Issues

The DALI CVEs listed below can be eliminated if the end user deletes the entire DALI backend directory (/opt/tritonserver/backends/dali/).
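
As a sketch, assuming your pipelines do not use the DALI backend, the directory can be removed inside the container (or via a RUN instruction in a derived Dockerfile):

rm -rf /opt/tritonserver/backends/dali/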

The DS container (x86: triton) includes DALI with known vulnerabilities. These are inherited from the x86 Triton base container nvcr.io/nvidia/tritonserver:22.09-py3. See CVE-2022-37454, CVE-2018-25032, CVE-2022-45061, CVE-2020-10735, CVE-2022-40897, and CVE-2022-40898 for details.

The DS container (x86: triton) includes a mailcap module with a known vulnerability that is not used by DeepStream. The module is present in the Conda environment used by DALI within the Triton docker. See CVE-2015-20107 for details. This will be fixed in the next release. Users may remove it from their docker images with the command: rm /usr/lib/python3.8/mailcap.py.

The DS container (x86: triton) includes librabbitmq 0.8.0, which has a known vulnerability with no official patch for Ubuntu 20.04 at this time. See CVE-2019-18609 for details. This will be addressed in the next release. To avoid it entirely, users may use one of the other IoT protocols supported by DeepStream: REDIS, Kafka, or Azure.

Running DeepStream

Prerequisites

Ensure these prerequisites are available on your system:

  1. nvidia-docker: We recommend using Docker 20.10.13 along with the latest nvidia-container-toolkit, as described in the installation steps (a minimal install sketch follows this list). Use of nvidia-docker2 packages in conjunction with prior Docker versions is now deprecated.

  2. NVIDIA display driver version 525.85.12.
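
As a minimal sketch, assuming the NVIDIA container toolkit apt repository is already configured on an Ubuntu host, the toolkit can be installed and Docker configured as follows:

# Install the toolkit, register the NVIDIA runtime with Docker, restart the daemon
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker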

Pull the container

Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.

Procedure:

  1. In the Pull column, click the icon to copy the docker pull command for the deepstream container of your choice

  2. Open a command prompt and paste the pull command; an example is shown after this list. The container image pull begins. Ensure the pull completes successfully before proceeding to the next step.
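
For example, to pull the development container (substitute any of the tags described above):

docker pull nvcr.io/nvidia/deepstream:6.2-devel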

Run the container

To run the container:

  1. Allow external applications to connect to the host's X display:
xhost +
  2. Run the docker container (use the desired container tag in the command line below):
    If using docker (recommended):
docker run --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.2 nvcr.io/nvidia/deepstream:6.2-devel

If using nvidia-docker (deprecated) based on a version of docker prior to 19.03:

nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.2  nvcr.io/nvidia/deepstream:6.2-devel

Note that the command mounts the host's X11 display in the guest filesystem to render output videos.

  3. Additional installations to use all DeepStream SDK features within the docker container.

With DS 6.2, DeepStream docker containers do not package the libraries necessary for certain multimedia operations such as audio data parsing, CPU decode, and CPU encode. This change could affect the processing of certain video streams/files, such as mp4 files that include audio tracks.

Please run the script below inside the docker image to install the additional packages that may be necessary to use all DeepStream SDK features:

/opt/nvidia/deepstream/deepstream/user_additional_install.sh

Command line options explained (a combined example follows this list):

  • -it runs the container in interactive mode

  • --gpus makes GPUs accessible inside the container. As an alternative to "all", it is possible to specify a single device (e.g., --gpus '"device=0"')

  • --rm deletes the container when it exits

  • --privileged grants the container access to host resources. This flag is needed to run Graph Composer from the -devel container

  • -v mounts a host directory into the container filesystem; above, it is used to mount the host's X11 display so output videos can be rendered

  • Users can mount additional directories (using the -v option) as required to easily access configuration files, models, and other resources (e.g., use -v /home:/home to mount the home directory into the container filesystem)

  • Additionally, the --cap-add SYSLOG option needs to be included to enable the nvds_logger functionality inside the container

  • To enable RTSP output, a network port needs to be mapped from the container to the host to allow incoming connections, using the -p option on the command line, e.g., -p 8554:8554
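
Putting several of these options together, a sketch of a run command that also enables nvds_logger and RTSP output might look like the following (the /home mount is just an example):

docker run --gpus all -it --rm --privileged --cap-add SYSLOG -p 8554:8554 -v /tmp/.X11-unix:/tmp/.X11-unix -v /home:/home -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.2 nvcr.io/nvidia/deepstream:6.2-devel

Note that -p is only needed when not using --net=host; with host networking, the container's ports are already reachable from the host network.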

See /opt/nvidia/deepstream/deepstream-6.2/README inside the container for deepstream-app usage.

Limitations

There are known bugs and limitations in the SDK. To learn more about these, refer to the release notes.

Using the Triton docker as a base image

When creating an image based on the Triton (x86) docker, one approach is to use an entrypoint with a combined script so that end users can run a specific script for their application:

ENTRYPOINT ["/bin/sh", "-c" , "/opt/nvidia/deepstream/deepstream-6.2/entrypoint.sh && <custom command>"]
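
For instance, a derived Dockerfile might look like the sketch below; my_pipeline.sh stands in for the custom command and is a hypothetical placeholder:

FROM nvcr.io/nvidia/deepstream:6.2-triton

# Hypothetical application script; replace with your own custom command
COPY my_pipeline.sh /opt/my_pipeline.sh

# Run DeepStream's entrypoint first, then the application script
ENTRYPOINT ["/bin/sh", "-c", "/opt/nvidia/deepstream/deepstream-6.2/entrypoint.sh && /opt/my_pipeline.sh"]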

License

For the DeepStream SDK containers, two different licenses apply, depending on the container used.

A copy of the license can also be found within a specific container at the location: /opt/nvidia/deepstream/deepstream-6.2/LicenseAgreement.pdf. By pulling and using the DeepStream SDK (deepstream) container from NGC, you accept the terms and conditions of this license.

Please note that all container images come with the following packages installed: librdkafka, hiredis, cmake, autoconf (license and license exception), libtool, libglvnd-dev, libgl1-mesa-dev, libegl1-mesa-dev, libgles2-mesa-dev.

In addition, the (deepstream:6.2-devel) container includes the Vulkan Validation Layers (v1.1.123) to support the NVIDIA Graph Composer.

The software listed below is provided under the terms of GPLv3.

To obtain source code for software provided under licenses that require redistribution of source code, including the GNU General Public License (GPL) and GNU Lesser General Public License (LGPL), contact oss-requests@nvidia.com. This offer is valid for a period of three (3) years from the date of the distribution of this product by NVIDIA CORPORATION.

Component           License
autoconf            GPL 3.0
libtool             GPL 3.0
libglvnd-dev        GPL 3.0
libgl1-mesa-dev     GPL 3.0
libegl1-mesa-dev    GPL 3.0
libgles2-mesa-dev   GPL 3.0

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.