NVIDIA DeepStream SDK
NVIDIA’s DeepStream SDK is a multi-sensor, AI-based streaming analytics toolkit for video, image, and audio understanding. DeepStream is an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform real-time pixels and sensor data into actionable insights. The DeepStream SDK features hardware-accelerated building blocks, called plugins, that bring deep neural networks and other complex processing tasks into a processing pipeline. The DeepStream SDK allows you to focus on building optimized Vision AI applications without having to design complete solutions from scratch.
The DeepStream SDK uses AI to perceive pixels and generate metadata, while offering integration from the edge to the cloud. The DeepStream SDK can be used to build applications across a wide range of use cases, including retail analytics, patient monitoring in healthcare facilities, parking management, automated optical inspection (AOI), supply chain management (logistics), and manufacturing operations.
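To make the plugin-and-pipeline model concrete, the minimal sketch below chains a few DeepStream GStreamer plugins (nvv4l2decoder, nvstreammux, nvinfer, nvdsosd) into a pipeline from Python. It is illustrative only: the sample stream and nvinfer config paths are assumptions based on the default layout of the DeepStream 7.0 samples, so adjust them for your installation.

```python
#!/usr/bin/env python3
# Minimal sketch: chain DeepStream's hardware-accelerated plugins into a
# GStreamer pipeline. The paths below assume the default DeepStream sample
# layout and may need to be adjusted for your installation.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

DS_ROOT = "/opt/nvidia/deepstream/deepstream"           # assumption: default install prefix
STREAM = f"{DS_ROOT}/samples/streams/sample_720p.h264"  # assumption: bundled sample clip
PGIE_CONFIG = f"{DS_ROOT}/samples/configs/deepstream-app/config_infer_primary.txt"

# filesrc -> decode -> batch (nvstreammux) -> infer (nvinfer) -> convert -> draw boxes -> discard
pipeline = Gst.parse_launch(
    f"filesrc location={STREAM} ! h264parse ! nvv4l2decoder ! m.sink_0 "
    f"nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    f"nvinfer config-file-path={PGIE_CONFIG} ! "
    f"nvvideoconvert ! nvdsosd ! fakesink sync=false"
)

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda *_: loop.quit())
bus.connect("message::error", lambda *_: loop.quit())

pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```

The same pipeline string can also be launched directly with gst-launch-1.0; building it from Python simply makes it easier to attach probes and application logic later.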
What is in the NVIDIA DeepStream SDK Collection
NVIDIA NGC offers collections that group related resources together. We have created the NVIDIA DeepStream SDK collection so that all of this information is easy to find in a single place.
Please note that we have consolidated the DeepStream containers as follows:
| Type | Architecture | Container Name | Target |
|---|---|---|---|
| Triton | Multi-Arch x86 + Jetson | deepstream:7.0-triton-multiarch | Deployment |
| DeepStream Samples | Multi-Arch x86 + Jetson | deepstream:7.0-samples-multiarch | Deployment |
| DeepStream-ARM-SBSA | ARM SBSA | deepstream:7.0-triton-arm-sbsa | Deployment |
| Development and Graph Composer | x86 | deepstream:7.0-gc-triton-devel | Development. Includes Graph Composer GUI |
NOTE: All DeepStream dockerfiles are available on GitHub for easy customization.
For a full list of new features, changes, and known limitations, please refer to the DeepStream 7.0 Release Notes.
Key New Features and Enhancements available in DeepStream 7.0:
| Category | Details |
|---|---|
| New Features | • New DeepStream Libraries: unleash the power of DeepStream without a GStreamer dependency through easy-to-use Python APIs. • New DeepStream Service Maker: a new abstraction layer to build vision AI applications in minutes. • DeepStream now supports WSL2 as a development environment. • DeepStream 3D Framework adds support for LIDAR and RADAR with BEVFusion and V2X models. • Introducing PipeTuner, a new tool to automatically optimize DeepStream application parameters. • New Single-View 3D tracking capability available in NvTracker. • Support for ARM SBSA-based servers. |
Key New Features and Enhancements available in GXF and Graph Composer 4.0:
| Category | Details |
|---|---|
| New Features | • C++ and Python application APIs • Event-based scheduler • New logger • Complex-valued parameter types • Multi-arch support in container builder • Runtime-only package for x86 |
| Enhancements | • Improved Python APIs with distributed execution support • Asynchronous UCX transmitter/receiver • More log control options • Performance enhancements in core to reduce entity creation latency and remove lock contention • DLPack and CUDAArray support |
| Item | Documentation |
|---|---|
| Documentation | DeepStream SDK Documentation |
| Getting Started | Quick Start Guide |
| Developing with C/C++ | DeepStream Reference Application (deepstream-app) |
| Developing with Python | Python Application GitHub Repository (see the sketch below) |
| Developing with Graph Composer | Graph Composer Reference Apps |
| DeepStream and TAO Toolkit Integration | TAO Supported Models |
| Deep Dives with DeepStream Ninjas | DeepStream Multi-Object Trackers |
| Additional Examples | |
| Learn More | New To DeepStream? Start here |
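To complement the Python application repository linked above, here is a minimal sketch of the metadata-access pattern those samples use: a buffer probe that walks the NvDsBatchMeta attached upstream by nvinfer and counts detected objects per frame. It assumes the DeepStream Python bindings (pyds) are installed and that the probe is attached downstream of nvinfer; the commented usage lines at the end are hypothetical and depend on how your pipeline names its elements.

```python
# Minimal sketch of the metadata-access pattern used in the DeepStream Python apps:
# a buffer probe that walks the NvDsBatchMeta produced upstream by nvinfer.
# Assumes the pyds bindings are installed.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Batch-level metadata attached to this Gst.Buffer by the DeepStream plugins.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        num_objects = 0
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                num_objects += 1  # obj_meta also exposes class id, bounding box, confidence
                l_obj = l_obj.next
            except StopIteration:
                break
        print(f"frame {frame_meta.frame_num}: {num_objects} objects")
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Hypothetical usage: attach the probe to the sink pad of the on-screen-display element.
# osd = pipeline.get_by_name("osd")   # assumes an element created with name "osd"
# osd.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, None)
```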
| Jetson Series | Modules | JetPack | Latest DeepStream Supported |
|---|---|---|---|
| Orin | Jetson AGX Orin, Jetson Orin NX, Jetson Orin Nano, IGX (including dGPU mode) | | 7.0 |
| Xavier | Jetson AGX Xavier, Jetson Xavier NX | | 6.3 (legacy) |
| TX2 | Jetson TX2 NX, Jetson TX2 | | 6.0 (legacy) |
| Nano | Jetson Nano | | 6.0 (legacy) |
| Enterprise GPU Architecture | GPUs |
|---|---|
| Ada Lovelace | L4, L40 |
| Ampere | A2, A10, A16, A30, A40, A100, RTX A6000 |
| Hopper | H100 |
| Turing | T4 |
| Software Dependencies | x86 / ARM SBSA | Jetson (via JetPack) |
|---|---|---|
| Operating System | Ubuntu 22.04 LTS | Ubuntu 22.04 LTS |
| GStreamer | 1.20.3 | 1.20.3 |
| Rivermax | 1.40 | 1.40 |
| DLFW (Triton) | 23.10 | 24.03 |
| TensorRT | 8.6.1.6 | 8.6.2.3 |
| CUDA | 12.2 U2 | 12.2 |
| cuDNN | 8.9.6 | 8.9.4.25 |
| GPU Driver (RM) | 535 (535 TRD8) | N/A |
| CV-CUDA (DeepStream Libraries) | 0.5 | N/A |
| NvImageCodec (DeepStream Libraries) | 0.2 | N/A |
| PyNvVideoCodec (DeepStream Libraries) | 1.0 | N/A |
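As a quick way to compare a host against the table above, the sketch below reads the versions that are discoverable from Python (GStreamer, the TensorRT Python bindings, and the CUDA runtime). The expected values in the comments are taken from the x86 column; the library names probed are assumptions and may differ on your system.

```python
# Rough sanity check of a few dependencies from the table above.
# Expected values in comments come from the x86 column; adjust for Jetson.
import ctypes

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
print("GStreamer:   ", Gst.version_string())   # expect 1.20.3 per the table

try:
    import tensorrt
    print("TensorRT:    ", tensorrt.__version__)  # expect 8.6.1.6 on x86
except ImportError:
    print("TensorRT:     python bindings not installed")

# CUDA runtime version via libcudart (e.g. 12020 -> 12.2)
cudart = None
for name in ("libcudart.so", "libcudart.so.12"):
    try:
        cudart = ctypes.CDLL(name)
        break
    except OSError:
        continue
if cudart is not None:
    version = ctypes.c_int(0)
    cudart.cudaRuntimeGetVersion(ctypes.byref(version))
    print("CUDA runtime:", f"{version.value // 1000}.{(version.value % 1000) // 10}")
else:
    print("CUDA runtime: libcudart not found")
```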
Note:
If you are looking for older versions of DeepStream, please refer to the x86 or Jetson archive. Archived documentation is available here.
DeepStream Support is available via:
| Method | Available to |
|---|---|
| Forums | |
| Direct Support | NVIDIA AI Enterprise License holders |
There are known bugs and limitations in the SDK. To learn more about those, refer to the release notes.
The following licenses apply to the DeepStream SDK assets:
| Asset | Applicable EULA | Notes |
|---|---|---|
| SDK | A copy of the license is available in the following folder of the SDK: | |
| Containers | License grants redistribution rights, allowing developers to build applications on top of the DeepStream containers | |
| Development Containers | A development-only license. Does not allow redistribution of the container | |
| TAO Models | All TAO pre-trained models included in the DeepStream SDK are covered by the NVIDIA AI Product License. | |
NOTE: By pulling, downloading, or using the DeepStream SDK, you accept the terms and conditions of the EULAs listed above.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.