DeepStream October 2024 (PB 24h2)

Description
The DeepStream SDK delivers a complete streaming analytics toolkit for AI-based video and image understanding and multi-sensor processing. This container is for NVIDIA Enterprise GPUs.
Publisher: NVIDIA
Latest Tag: 7.1.6-triton-x86
Modified: May 3, 2025
Compressed Size: 9.99 GB
Multinode Support: No
Multi-Arch Support: No

Before You Start

This DeepStream SDK container is available as part of NVIDIA AI Enterprise.

The version of DeepStream available under NVIDIA AI Enterprise supports only x86 + NVIDIA GPU deployments.

See the DeepStream NVAIE 7.1 Release Notes.

DeepStream container for Enterprise Grade GPUs

The section below describes the container options offered for NVIDIA Data Center GPUs running on x86 platforms.

Container Name: deepstream-pb24h2:7.1-triton-x86
Architecture: x86
License Type: Deployment
Notes: The DeepStream Triton container enables inference using Triton Inference Server. With Triton, developers can run inference natively using TensorFlow, TensorFlow-TensorRT, PyTorch, and ONNX-RT. Inference with Triton is supported in the reference application (deepstream-app).

Getting Started

Prerequisites:

Ensure these prerequisites are installed on your system before proceeding to the next step:

Component: nvidia-docker
Details: We recommend Docker 20.10.13 along with the latest nvidia-container-toolkit, as described in the installation steps. Use of nvidia-docker2 packages in conjunction with prior Docker versions is now deprecated.

Component: NVIDIA GPU Driver
Details: Use version 535.183.06 for production deployments on Data Center GPUs.

Component: Codecs script
Details: DeepStream dockers no longer package libraries for certain multimedia operations such as audio data parsing, CPU decode, and CPU encode. This translates into limited functionality with MP4 files. We provide a script to install these components; make sure to execute it within the container:
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
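As a quick sanity check before pulling the container, the prerequisites above can be verified from the host. This is an illustrative sketch, not part of the official installation steps; the `check_prereqs` function name is our own, and the version numbers it mentions are simply the ones recommended above.

```shell
#!/bin/sh
# Hedged sketch: report whether the prerequisites described above are present.
check_prereqs() {
    if command -v docker >/dev/null 2>&1; then
        echo "docker: $(docker --version)"
    else
        echo "docker: not found (Docker 20.10.13 is recommended above)"
    fi
    if command -v nvidia-ctk >/dev/null 2>&1; then
        echo "nvidia-container-toolkit: $(nvidia-ctk --version | head -n 1)"
    else
        echo "nvidia-container-toolkit: not found"
    fi
    if command -v nvidia-smi >/dev/null 2>&1; then
        echo "driver: $(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n 1)"
    else
        echo "driver: nvidia-smi not found (535.183.06 is recommended above)"
    fi
}

check_prereqs
```

The script only reports what it finds; interpreting the versions against the recommendations above is left to the reader.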

Pull the container:

  1. From the top-right corner of this page, select the Get Container pull-down and copy the URL of the default container. Alternatively, click View all tags to select a different container.

  2. Open a command prompt on your Linux-compatible system and run the following command. Ensure the pull completes successfully before proceeding to the next step.

docker pull nvcr.io/nvidia/deepstream-pb24h2:7.1-triton-x86

Run the container:

  1. Allow external applications to connect to the host's X display:

xhost +

  2. Run the docker container (use the desired container tag in the command line below):
    If using docker (recommended):

For x86 Systems:

docker run --gpus all -it --rm --network=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-7.1 nvcr.io/nvidia/deepstream-pb24h2:7.1-triton-x86

Docker command line options explained:

Option: -it
Description: Run in interactive mode.

Option: --gpus
Description: Makes GPUs accessible inside the container. It is also possible to specify a single device (e.g. --gpus '"device=0"').

Option: --rm
Description: Deletes the container when it exits.

Option: --privileged
Description: Grants the container access to host resources. This flag is needed to run Graph Composer from the -devel container.

Option: -v
Description: Specifies a mount directory; here it is used to mount the host's X11 display into the container filesystem to render output videos. Mount additional directories (with further -v options) as required to easily access configuration files, models, and other resources (e.g., -v /home:/home mounts the home directory into the container filesystem).

Option: --cap-add SYSLOG
Description: Needs to be included to enable use of the nvds_logger functionality inside the container.

Option: -p
Description: To enable RTSP out, a network port must be mapped from container to host to allow incoming connections, e.g. -p 8554:8554.
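Putting several of these options together, the sketch below composes a single docker run command that also mounts the host home directory, enables nvds_logger, and maps the RTSP port. The combination of options is illustrative, not a required configuration, and the command is echoed for review rather than executed:

```shell
#!/bin/sh
# Hedged sketch: compose a docker run invocation combining the options above.
# The command is echoed rather than run, so it can be reviewed before use.
IMAGE="nvcr.io/nvidia/deepstream-pb24h2:7.1-triton-x86"

RUN_CMD="docker run --gpus all -it --rm --network=host \
  --cap-add SYSLOG \
  -p 8554:8554 \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /home:/home \
  -e DISPLAY=\$DISPLAY \
  -w /opt/nvidia/deepstream/deepstream-7.1 \
  $IMAGE"

echo "$RUN_CMD"
```

Once the echoed command looks right for your setup, run it directly (or pipe it to sh).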

NOTES:
Please refer to /opt/nvidia/deepstream/deepstream-7.1/README inside the container for details on deepstream-app usage.

Using the Triton docker as a base image
To build on the Triton (x86) docker as a base image, one approach is to define an entrypoint that chains the container's setup script with a custom command, so end users can run a script specific to their application.

ENTRYPOINT ["/bin/sh", "-c", "/opt/nvidia/deepstream/deepstream-7.1/entrypoint.sh && <custom command>"]
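A minimal derived image following this approach might look like the sketch below. The script name run_my_app.sh and its /opt/app location are hypothetical placeholders for an end user's own application script; only the base image and the entrypoint.sh path come from this page.

```dockerfile
# Hedged sketch of a derived image; run_my_app.sh is a hypothetical user script.
FROM nvcr.io/nvidia/deepstream-pb24h2:7.1-triton-x86

# Copy the user's application script into the image (placeholder name and path).
COPY run_my_app.sh /opt/app/run_my_app.sh
RUN chmod +x /opt/app/run_my_app.sh

# Chain the DeepStream setup script with the user's command.
ENTRYPOINT ["/bin/sh", "-c", "/opt/nvidia/deepstream/deepstream-7.1/entrypoint.sh && /opt/app/run_my_app.sh"]
```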

For the Triton samples, before running /opt/nvidia/deepstream/deepstream-7.1/samples/prepare_classification_test_video.sh, the FFmpeg package along with additional dependent libraries needs to be installed using the command below. For additional information, please refer to section 1.4 (codecs: DIFFERENCES WITH DEEPSTREAM 6.1 AND ABOVE) and section 1.5 (BREAKING CHANGES) in the Release Notes.

apt-get install --reinstall libflac8 libmp3lame0 libxvidcore4 ffmpeg

License

The following licenses apply to the DeepStream SDK assets:

Asset: SDK
Applicable EULA: DeepStream SDK EULA
Notes: A copy of the license is available in the following folder of the SDK: /opt/nvidia/deepstream/deepstream-7.1/LicenseAgreement.pdf

Asset: Containers
Applicable EULA: DeepStream NGC License
Notes: The license grants redistribution rights, allowing developers to build applications on top of the DeepStream containers.

Asset: TAO Models
Applicable EULA: NVIDIA AI Product License
Notes: All TAO pre-trained models included in the DeepStream SDK are covered by the NVIDIA AI Product License.

NOTE: By pulling, downloading, or using the DeepStream SDK, you accept the terms and conditions of the EULA licenses listed above.

Please note that all container images come with the following packages installed:

  • librdkafka

  • hiredis

  • cmake

  • autoconf (license and license exception)

  • libtool

  • libglvnd-dev

  • libgl1-mesa-dev, libegl1-mesa-dev, libgles2-mesa-dev

The software listed below is provided under the terms of GPLv3.

To obtain source code for software provided under licenses that require redistribution of source code, including the GNU General Public License (GPL) and GNU Lesser General Public License (LGPL), contact oss-requests@nvidia.com. This offer is valid for a period of three (3) years from the date of the distribution of this product by NVIDIA CORPORATION.

Component License
autoconf GPL 3.0
libtool GPL 3.0
libglvnd-dev GPL 3.0
libgl1-mesa-dev GPL 3.0
libegl1-mesa-dev GPL 3.0
libgles2-mesa-dev GPL 3.0

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.