This repository contains configuration files for a reference end-to-end video analytics application built on NVIDIA DeepStream. The application is an example Traffic Cam Analyzer that detects four classes of objects on the road (car, pedestrian, road sign, and bicycle) and then classifies each detected car into one of six vehicle types (e.g., sedan, minivan, truck).
We use TrafficCamNet for object detection and VehicleTypeNet for classification, both of which are pre-trained models available in the Models section of NGC.
The models are configured with NVIDIA DeepStream and served with Triton Inference Server. A variant of the DeepStream container on NGC ships with Triton Inference Server built in and can be obtained by selecting the appropriate tag.
The deployment is first done on a single 2g.10gb MIG instance of an NVIDIA A100 GPU, then scaled all the way up to 8 A100 GPUs, all configured with the same MIG profile. This showcases how Multi-Instance GPU (MIG) can be leveraged to serve IVA use cases in parallel - either scaling a single use case, or deploying different use cases on a single GPU. The 2g.10gb MIG profile works well for this specific use case, but feel free to configure your MIG slices as appropriate for yours.
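If MIG is not yet configured on your A100, the slices can be created with `nvidia-smi`. The sketch below is an assumption about a typical setup, not part of this resource: profile ID 14 corresponds to 2g.10gb on an A100-40GB, but you should confirm the profile IDs on your own system with `nvidia-smi mig -lgip` before creating instances.

```shell
# Enable MIG mode on GPU 0 (may require stopping GPU clients and a reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports, with their IDs
nvidia-smi mig -lgip

# Create a 2g.10gb GPU instance and its default compute instance
# (profile ID 14 assumed here for A100-40GB; verify with -lgip)
sudo nvidia-smi -i 0 mig -cgi 14 -C

# Confirm the new MIG device appears
nvidia-smi -L
```

Repeat the `-cgi` step (or pass a comma-separated list of profile IDs) to carve the GPU into multiple identical slices.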
Watch a live demonstration of this example at GTC 2021!
Launch the docker container
docker run -it --rm --gpus device=<MIG-instance-UUID> nvcr.io/nvidia/deepstream:5.1-21.02-triton
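The `<MIG-instance-UUID>` placeholder in the command above can be read from `nvidia-smi -L`, which lists MIG devices beneath their parent GPU with `MIG-` prefixed UUIDs (the exact UUID format varies by driver version):

```shell
# List GPUs and MIG devices; copy the UUID of the target 2g.10gb instance
nvidia-smi -L
```

Pass that UUID verbatim to `--gpus device=` so the container is pinned to that single MIG slice.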
Download this resource from NGC using the wget command above or the NGC CLI.
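With the NGC CLI installed and configured (`ngc config set`), the resource can be fetched with `ngc registry resource download-version`. The `<org>/<resource-name>:<version>` string below is a placeholder, not the actual identifier - copy the exact command from this resource's NGC page:

```shell
# Placeholder identifier - substitute the org/name:version shown on NGC
ngc registry resource download-version "<org>/<resource-name>:<version>"
```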
Run the automation script
cd ds_triton && bash automate_script.sh
On a 2g.10gb MIG instance, this application runs at 30 frames per second across 35 full-HD (1080p) video streams.
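To scale the same use case across every MIG slice, one container can be launched per instance, each pinned to its own UUID. A minimal sketch, assuming recent-driver `MIG-xxxx...` style UUIDs in the `nvidia-smi -L` output (adjust the pattern for older `MIG-GPU-.../gi/ci` style identifiers):

```shell
# Launch one DeepStream + Triton container per MIG instance,
# pinning each container to a distinct MIG device UUID.
for uuid in $(nvidia-smi -L | grep -o 'MIG-[0-9a-f-]*'); do
  docker run -d --rm --gpus device="$uuid" \
    nvcr.io/nvidia/deepstream:5.1-21.02-triton \
    bash -c "cd ds_triton && bash automate_script.sh"
done
```

The same loop also covers the other deployment pattern mentioned above: point different containers at different images or scripts to run distinct use cases side by side on one GPU.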