NGC | Catalog

Smart Parking Detection

The Smart Parking Detection container includes the DeepStream application and the plugins for an example application of a smart parking solution.
Latest Tag: 5.0-20.08
Modified: April 9, 2024
Compressed Size: 1.96 GB
Architecture: Linux / amd64

What is Smart Parking Detection DeepStream-360d Container?

The Smart Parking Detection container provides the perception pipeline of an end-to-end reference application for managing parking garages. The pipeline receives video feeds from cameras, generates metadata from the pixels, and sends that metadata to an analytics pipeline for data analytics and a visualization dashboard. The container includes the DeepStream perception application and the plugins for an example smart parking solution; the companion data analytics application is provided in the GitHub repo.

The perception pipeline covers:

  • Receiving video inputs from nineteen 360-degree cameras
  • Dewarping the frames received from the cameras
  • Performing car detection in aisle and parking spots
  • Generating entry/exit events
  • Performing calibration to map camera coordinates to global coordinates
  • Generating metadata to send to data analytics
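The calibration step maps each camera's pixel coordinates into a shared global coordinate system, typically with a projective (homography) transform. The sketch below is purely illustrative: the matrix values and the helper name are assumptions, while the real 360d application loads per-camera calibration data from its configuration files.

```python
# Sketch: map a camera pixel (u, v) to global coordinates with a homography.
# The matrix values below are illustrative assumptions, not real calibration.

def apply_homography(H, u, v):
    """Apply a 3x3 homography H to pixel (u, v), returning global (x, y)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# Hypothetical calibration: scale pixels to metres and shift the origin.
H = [[0.01, 0.0, 5.0],
     [0.0, 0.01, 2.0],
     [0.0, 0.0, 1.0]]

print(apply_homography(H, 640, 360))  # centre pixel of a 1280x720 frame
```

In the real pipeline this transform is applied per detection, after dewarping, so that events from all nineteen cameras land in one garage-wide coordinate frame.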

Running Smart Parking Detection Container


Ensure these prerequisites are available on your system:

  1. nvidia-docker
  2. NVIDIA display driver version 450.51
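Both prerequisites can be checked from a terminal before pulling the image. This is a sketch using the standard `nvidia-smi` and `docker info` commands; adjust it for your distribution:

```shell
# Sketch: verify the NVIDIA driver and the nvidia-docker runtime are present.
check_driver() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=driver_version --format=csv,noheader
  else
    echo "NVIDIA driver (nvidia-smi) not found"
  fi
}

check_runtime() {
  if docker info 2>/dev/null | grep -qi nvidia; then
    echo "nvidia runtime available"
  else
    echo "nvidia-docker runtime not detected"
  fi
}

check_driver
check_runtime
```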

Pull the container

Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.


  1. In the Pull column, click the icon to copy the docker pull command for the deepstream_360d container.
  2. Open a command prompt and paste the pull command. The container image download begins; ensure the pull completes successfully before proceeding to the next step.

Run the container

To run the container:

  1. Allow external applications to connect to the host’s X display:
xhost +
  2. Run the container using the docker command:
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /root nvcr.io/nvidia/deepstream_360d:5.0-20.08

Note that the command mounts the host’s X11 display in the guest filesystem to render output videos.

Options explained:

  • --gpus all gives the container access to all GPUs on the host
  • -it runs the container in interactive mode
  • --rm deletes the container when it exits
  • -v mounts a host directory into the container; here it mounts the host’s X11 socket so the container can render output videos
  • 5.0-20.08 is the image tag; 5.0 refers to the DeepStream release and 20.08 to the version of the container for that release
  • Additional directories containing configuration files and models can be mounted (with more -v options) for access by applications executed from within the container
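The options above can be combined into a small launcher script. This is only a sketch: the registry path and the extra model/config mount are assumptions to adjust for your setup, and the script prints the command rather than executing it so it can be reviewed first.

```shell
# Sketch of a launcher for the 360d container.
# IMAGE path and MODELS_DIR are assumptions; substitute your own values.
TAG="5.0-20.08"
IMAGE="nvcr.io/nvidia/deepstream_360d:${TAG}"
MODELS_DIR="${HOME}/deepstream-models"   # hypothetical extra mount

run_cmd="docker run --gpus all -it --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v ${MODELS_DIR}:/root/models \
  -e DISPLAY=\$DISPLAY \
  -w /root ${IMAGE}"

# Print rather than execute, so the command can be reviewed first.
echo "${run_cmd}"
```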

See /opt/nvidia/deepstream/deepstream-5.0/README inside the container for usage information.

Interfacing with Streaming and Batch Analytics

The DeepStream 360d app can serve as the perception layer that accepts multiple streams of 360-degree video to generate metadata and parking-related events. The events are transmitted over Kafka to a streaming and batch analytics backbone. Refer to this post for more details.
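The metadata travels as JSON messages on a Kafka topic. The exact schema is defined by the DeepStream message converter; the sketch below uses an illustrative payload with assumed field names to show how a downstream analytics consumer might decode an entry/exit event:

```python
import json

# Illustrative parking-event payload; the field names are assumptions,
# not the exact schema emitted by the DeepStream message converter.
raw = json.dumps({
    "sensor": {"id": "camera_07", "type": "360d"},
    "event": {"type": "entry", "timestamp": "2024-04-09T12:00:00Z"},
    "object": {"type": "car", "coordinates": {"x": 11.4, "y": 5.6}},
})

def decode_event(message: str):
    """Return (sensor_id, event_type) from a JSON event message."""
    payload = json.loads(message)
    return payload["sensor"]["id"], payload["event"]["type"]

print(decode_event(raw))  # a Kafka consumer would call this per message
```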

Container Contents

The deepstream_360d:5.0-20.08 container includes the 360d application binary from the DeepStream 5.0 360d release package, along with the models, configuration files, and videos. In addition, the container includes DeepStream plugins that are part of the DeepStream SDK 5.0.

The container is intended to be a deployment container and is not set up for building sources. It does not have toolchains, libraries, include files, etc. required for building source code within the container. It is recommended to build software on a host machine and then transfer the required binaries to the container.


The DeepStream 360d license is available at /opt/nvidia/deepstream/deepstream-5.0/LicenseAgreement.pdf . By pulling and using the DeepStream 360d (deepstream) container in NGC, you accept the terms and conditions of this license.

Suggested Reading

For more information, including blogs and webinars, see the DeepStream SDK website.

To run the analytics server that analyzes the metadata from this perception application, refer to the DeepStream 360-degree application on GitHub.

DeepStream documentation, including the development guide, getting started guide, plug-ins manual, API reference manual, migration guide, technical FAQ, and release notes, is available on the DeepStream SDK documentation site.

Download the Transfer Learning Toolkit from NGC.

If you have any questions or feedback, please refer to the discussions on DeepStream Forum.

Technical Blog

Learn how to deploy real-time intelligent video analytics apps and services using DeepStream SDK

Learn about the entire architecture for large scale multi-camera deployment used in this demo.

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.