Linux / amd64
The smart parking detection container provides the perception pipeline of an end-to-end reference application for managing parking garages. The perception pipeline generates metadata from camera feeds and sends it to the analytics pipeline for data analytics and a visualization dashboard. The container includes the DeepStream application for perception: it receives video feeds from cameras, generates insights from the pixels, and sends the metadata to a data analytics application. The data analytics application, along with the plugins for an example smart parking solution, is provided in the GitHub repo.
The perception pipeline covers:
Ensure these prerequisites are available on your system: an NVIDIA GPU with a suitable NVIDIA driver, Docker, and the NVIDIA Container Toolkit (nvidia-docker), which the --gpus option in the run command below requires.
Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
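For example, to pull the image tag used in the run command below:
docker pull nvcr.io/nvidia/deepstream_360d:5.0-20.08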
Procedure
To run the container:
xhost +
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /root nvcr.io/nvidia/deepstream_360d:5.0-20.08
Note that the command mounts the host's X11 display socket into the container filesystem so that output video can be rendered on the host display.
Options explained:
--gpus all makes the host GPUs accessible inside the container.
-it runs the container in interactive mode with a terminal attached.
--rm removes the container after it exits.
-v /tmp/.X11-unix:/tmp/.X11-unix mounts the host's X11 socket into the container.
-e DISPLAY=$DISPLAY passes the host display so the application can render its output.
-w /root sets the working directory inside the container.
5.0-20.08 is the image tag (release version).
See /opt/nvidia/deepstream/deepstream-5.0/README inside the container for usage information.
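For example, once inside the running container the README can be viewed with:
cat /opt/nvidia/deepstream/deepstream-5.0/README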
The DeepStream 360d app can serve as the perception layer that accepts multiple streams of 360-degree video to generate metadata and parking-related events. The events are transmitted over Kafka to a streaming and batch analytics backbone. Refer to this post for more details.
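As an illustration only, Kafka's console consumer can be used to inspect the event stream; the broker address and topic name below are placeholders that depend on how the analytics backbone is configured:
kafka-console-consumer.sh --bootstrap-server <broker-host>:9092 --topic <metadata-topic> --from-beginning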
The deepstream_360d:5.0-20.08 container includes the 360d application binary from the DeepStream 5.0 360d release package, along with the models, configuration files, and videos. In addition, the container includes DeepStream plugins that are part of the DeepStream SDK 5.0.
The container is intended as a deployment container and is not set up for building sources. It does not include the toolchains, libraries, include files, and other components required to build source code inside the container. It is recommended to build software on a host machine and then transfer the required binaries into the container.
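For example, a binary built on the host can be copied into a running container with docker cp (the container ID and paths below are illustrative):
docker cp ./my_custom_app <container-id>:/root/my_custom_app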
The DeepStream 360d license is available at /opt/nvidia/deepstream/deepstream-5.0/LicenseAgreement.pdf. By pulling and using the DeepStream 360d container from NGC, you accept the terms and conditions of this license.
For more information, including blogs and webinars, see the DeepStream SDK website.
To run the analytics server that analyzes the metadata from this perception application, refer to the DeepStream 360-degree application on GitHub.
DeepStream documentation, including the development guide, getting started guide, plug-ins manual, API reference, migration guide, technical FAQ, and release notes, can be found at https://docs.nvidia.com/metropolis/index.html.
Download the Transfer Learning Toolkit from NGC: https://ngc.nvidia.com/catalog/containers/nvidia:tlt-streamanalytics
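For example, it can be pulled with the following command (replace <tag> with the desired version listed in the NGC catalog):
docker pull nvcr.io/nvidia/tlt-streamanalytics:<tag>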
If you have any questions or feedback, please refer to the discussions on DeepStream Forum.
Learn how to deploy real-time intelligent video analytics apps and services using DeepStream SDK
Learn about the entire architecture for large scale multi-camera deployment used in this demo.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.