- Easy-to-use demo of GPU-accelerated inference
- Based on the NGC DeepStream container (https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t)
- Leverages Kubernetes, Helm, NGC, and DeepStream
- Deployed via Helm
- Does not require a Video Management System (VMS)
The DeepStream SDK delivers a complete streaming-analytics toolkit for real-time, AI-based video and image understanding and multi-sensor processing. DeepStream features hardware-accelerated building blocks, called plugins, that bring deep neural networks and other complex processing tasks into a stream-processing pipeline. The SDK lets you focus on building core deep learning networks and IP rather than designing end-to-end solutions from scratch.
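As an illustration of how those plugins compose into a pipeline, the sketch below decodes one stream, batches it, runs inference, and draws detections. It is illustrative only: it requires DeepStream on an NVIDIA device, and the sample file and config paths are assumptions based on the DeepStream 6.0 sample layout.

```shell
# Illustrative only: decode -> batch -> infer -> overlay -> render.
# Requires DeepStream 6.0 installed on an NVIDIA GPU/Jetson system.
gst-launch-1.0 \
  filesrc location=/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264 ! \
  h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```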
More information on the DeepStream container is available here: https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t
If you want to use NGC models in the Video Analytics demo application, follow the steps below:
1. helm fetch https://helm.ngc.nvidia.com/nvidia/charts/video-analytics-demo-l4t-0.1.2.tgz --untar
2. cd into the folder video-analytics-demo-l4t and update the file values.yaml
3. Go to the ngcModel section and update the NGC model as shown below:
```yaml
# Update the NGC model used in DeepStream
ngcModel:
  # NGC model pruned URL from NGC
  getModel: ngc registry model download-version "nvidia/tao/trafficcamnet:pruned_v1.0.1"
  # NGC model name
  name: trafficcamnet
  # Model file name that DeepStream will use
  fileName: resnet18_trafficcamnet_pruned.etlt
  # Model config that needs to be updated
  modelConfig: config_infer_primary_trafficcamnet.txt
  # Do not update putModel
  putModel: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/

ngcConfig:
  apikey: ""
  ngcorg: "nvidian"
  ngcteam: "no-team"
```
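After editing, a quick sanity check that the keys you changed actually landed in values.yaml can look like the sketch below. It writes a stub values.yaml so the snippet is self-contained; in practice you would run only the grep loop inside the untarred chart directory.

```shell
# Stub values.yaml so this sketch is self-contained; in practice, skip this
# step and run the check against the chart's real values.yaml.
cat <<'EOF' > values.yaml
ngcModel:
  name: trafficcamnet
  fileName: resnet18_trafficcamnet_pruned.etlt
  modelConfig: config_infer_primary_trafficcamnet.txt
EOF

# Verify each edited key is present before running "helm install".
for key in "name: trafficcamnet" \
           "fileName: resnet18_trafficcamnet_pruned.etlt" \
           "modelConfig: config_infer_primary_trafficcamnet.txt"; do
  grep -q "$key" values.yaml || { echo "missing: $key"; exit 1; }
done
echo "values.yaml contains the expected model settings"
```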
Execute the commands below to deploy the Intelligent Video Analytics demo with the built-in video and WebUI:
```shell
helm fetch https://helm.ngc.nvidia.com/nvidia/charts/video-analytics-demo-l4t-0.1.2.tgz
helm install video-analytics-demo video-analytics-demo-l4t-0.1.2.tgz
```
If you want to use a camera as input, follow the steps below to deploy the Intelligent Video Analytics demo.
1. helm fetch https://helm.ngc.nvidia.com/nvidia/charts/video-analytics-demo-l4t-0.1.2.tgz --untar
2. cd into the folder video-analytics-demo-l4t and update the file values.yaml
3. Go to the cameras section in the values.yaml file and add the address of your IP camera; read the comments in that section for details. One or more cameras can be added as shown below:
```yaml
cameras:
  camera1: rtsp://XXXX
  camera2: rtsp://XXXX
```
4. helm install video-analytics-demo ./video-analytics-demo-l4t
To use the RTSP stream output, add the configuration below when installing the Helm chart:
```shell
cat <<EOF | tee values.yaml
service:
  type: NodePort
  port: 80
  rtspnodePort: 31113
  webuiPort: 5080
  webuinodePort: 31115
EOF
```
Run the command below to install the video analytics demo with the custom values above:

```shell
helm install iva --values ./values.yaml video-analytics-demo-l4t
```
The WebUI can be accessed at http://<IPAddress of Node>:31115
NOTE:
The WebUI application needs at least 1.25 GB of storage on Jetson/ARM systems.
The RTSP stream is available at rtsp://<IPAddress of Node>:31113/ds-test
The IP address of the node can be found by executing ifconfig on the server node.
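Assuming a Linux node, the two URLs above can also be assembled in the shell; `hostname -I` is used here instead of parsing `ifconfig` output, and reports the same addresses:

```shell
# Pick the node's first IPv4 address and build the WebUI and RTSP URLs.
NODE_IP=$(hostname -I | awk '{print $1}')
echo "WebUI: http://${NODE_IP}:31115"
echo "RTSP:  rtsp://${NODE_IP}:31113/ds-test"
```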
The DeepStream SDK license is available inside the container at /opt/nvidia/deepstream/deepstream-6.0/LicenseAgreement.pdf. By pulling and using the DeepStream SDK (deepstream) container from NGC, you accept the terms and conditions of this license.
The DeepStream documentation, including the development guide, plug-in manual, API reference manual, migration guide, FAQ, and release notes, can be found at https://docs.nvidia.com/metropolis/index.html
If you have any questions or feedback, please refer to the discussions on DeepStream SDK Forum.
For more information, including blogs and webinars, see the DeepStream SDK website
Email: EGXSupport@nvidia.com