
TrafficCamNet

Description: 4-class object detection network to detect cars in an image.
Publisher: NVIDIA
Latest Version: pruned_onnx_v1.0.4
Modified: March 17, 2025
Size: 5.13 MB

TrafficCamNet Model Card

Model Overview

TrafficCamNet detects cars, persons, road signs, and two-wheelers in an image. This model is ready for commercial use.

References

Citations

  • Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You Only Look Once: Unified, Real-Time Object Detection. In: CVPR (2016)
  • Erhan, D., Szegedy, C., Toshev, A., Anguelov, D.: Scalable Object Detection Using Deep Neural Networks. In: CVPR (2014)
  • He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition. In: CVPR (2016)

Using TAO Pre-trained Models

  • Get TAO Container
  • Get other purpose-built models from the NGC model registry:
    • TrafficCamNet
    • PeopleNet
    • PeopleNet-Transformer
    • DashCamNet

Model Architecture

Architecture Type: Convolutional Neural Network (CNN)
Network Architecture: DetectNet_v2 + ResNet18 (Feature Extractor)

The model is based on the NVIDIA DetectNet_v2 detector with ResNet18 as the feature extractor. This architecture, also known as GridBox object detection, uses bounding-box regression on a uniform grid over the input image. The GridBox system divides the input image into a grid in which each cell predicts four normalized bounding-box parameters (xc, yc, w, h) and a confidence value per output class. The raw normalized bounding-box and confidence detections need to be post-processed by a clustering algorithm, such as DBSCAN or NMS, to produce the final bounding-box coordinates and category labels.
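To make the grid-to-box relationship concrete, below is a rough decode sketch in Python. It is illustrative only: the stride, scale, and offset constants are assumptions mirroring the bbox scale/offset in the training spec shown later, and the array layout is assumed; the authoritative decode is the DetectNet_v2 post-processor in TAO/DeepStream.

    import numpy as np

    STRIDE = 16                  # 960/60 = 544/34: one grid cell per 16x16 pixel block (assumed)
    SCALE, OFFSET = 35.0, 0.5    # assumed to mirror bbox { scale, offset } in the training spec

    def decode_gridbox(cov, bbox, thresh=0.4):
        """Decode one class: cov is an (H/16, W/16) coverage map, bbox is a
        (4, H/16, W/16) offset map. Returns raw boxes (x1, y1, x2, y2, score)
        in input-image pixels, before DBSCAN/NMS clustering."""
        boxes = []
        rows, cols = cov.shape
        for i in range(rows):
            for j in range(cols):
                if cov[i, j] < thresh:
                    continue  # cell does not cover an object of this class
                # Cell center in input-image coordinates.
                cx, cy = (j + OFFSET) * STRIDE, (i + OFFSET) * STRIDE
                # Offsets are assumed to be distances from the cell center.
                x1 = cx - bbox[0, i, j] * SCALE
                y1 = cy - bbox[1, i, j] * SCALE
                x2 = cx + bbox[2, i, j] * SCALE
                y2 = cy + bbox[3, i, j] * SCALE
                boxes.append((x1, y1, x2, y2, float(cov[i, j])))
        return boxes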

Input:

Input Type(s): Images
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: 4D
Other Properties Related to Input:

  • RGB resolution: 960 x 544 x 3 (W x H x C)
  • Channel ordering: NCHW, where N = batch size, C = number of channels (3), H = image height (544), W = image width (960)
  • Input scale: 1/255.0
  • Mean subtraction: none
  • No minimum bit depth, alpha, or gamma
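A minimal pre-processing sketch matching the properties above, assuming OpenCV and NumPy are available (the function name is illustrative, not part of any NVIDIA API):

    import cv2
    import numpy as np

    def preprocess(image_bgr):
        """Prepare one frame for TrafficCamNet: 960x544 RGB, scaled by 1/255,
        no mean subtraction, NCHW layout."""
        resized = cv2.resize(image_bgr, (960, 544))        # dsize is (W, H)
        rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)     # model expects RGB
        scaled = rgb.astype(np.float32) / 255.0            # input scale 1/255.0
        chw = np.transpose(scaled, (2, 0, 1))              # HWC -> CHW
        return chw[np.newaxis, ...]                        # add batch dim -> (1, 3, 544, 960)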


Output:

Output Type(s): Label(s), Bounding-Box(es), Confidence Scores
Output Format: Label: Text String(s); Bounding Box: (x-coordinate, y-coordinate, width, height), Confidence Scores: Floating Point
Other Properties Related to Output:

  • Category labels: persons, road signs, two-wheelers, and vehicles
  • Bounding-box coordinates
  • Confidence scores


Software Integration:

Runtime Engine(s):

  • TAO 5.1
  • DeepStream 6.1 or later

Supported Hardware Architecture Compatibility:

  • Ampere
  • Jetson
  • Hopper
  • Lovelace
  • Pascal
  • Turing

Preferred Operating System(s):

  • Linux
  • Linux 4 Tegra

Model versions

  • unpruned_v1.0 - ResNet18-based pre-trained model.
  • pruned_v1.0 - ResNet18 deployment model. Contains a common INT8 calibration cache for GPU and DLA.

Training

  • Data Collection Method by dataset
    • Automatic/Sensors
  • Labeling Method by dataset:
    • Human

The majority of the training dataset was collected and labeled in-house from several traffic cameras in a city in the US; the remainder was taken from images captured by a variety of dashcams.

Training Data Properties

Two proprietary, internal datasets labeled in-house were used.

  • One dataset has more than 160,000 images containing 3 million cars. Most of the training dataset was collected from several traffic cameras in a city in the US.
  • The second dataset consists of 40,000 images of cars from a variety of dashcams to help with generalization and discrimination across classes.
Object Distribution

Environment              Images    Cars   Persons   Road Signs   Two-Wheelers
Dashcam (5 ft height)    40,000    1.7M   720,000   354,127      54,000
Traffic-signal content   160,000   1.1M   53,500    184,000      11,000
Total                    200,000   2.8M   773,500   538,127      65,000

Training Data Ground-truth Labeling Guidelines

  • All objects in the image that fall under one of the four classes (car, person, two-wheeler, road_sign) and are larger than the smallest bounding-box limit for the corresponding class (height >= 10 px OR width >= 10 px at 1920x1080) are labeled with the appropriate class label (see the sketch after this list).
  • Occlusion: Partially occluded objects that are approximately 60% or more visible are labeled with a bounding box around the visible part of the object and are marked as partially occluded. Objects with less than 60% visibility are not annotated.
  • Truncation: Objects at the edge of the frame that are 60% or more visible are labeled and marked with the truncation flag.
  • A frame is not required to contain any objects.
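For clarity, the minimum-size rule from the first bullet can be expressed as a small check. This helper is hypothetical; the name and the rescaling to the 1920x1080 reference resolution are assumptions, not part of NVIDIA's labeling tooling:

    def meets_min_size(box_w, box_h, frame_w=1920, frame_h=1080):
        """Return True if a box satisfies height >= 10 px OR width >= 10 px,
        measured at the 1920x1080 reference resolution."""
        w_ref = box_w * 1920.0 / frame_w   # rescale width to the reference frame
        h_ref = box_h * 1080.0 / frame_h   # rescale height to the reference frame
        return w_ref >= 10.0 or h_ref >= 10.0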

Performance

Evaluation Data Properties

  • Data Collection Method by dataset
    • Automatic/Sensors
  • Labeling Method by dataset:
    • Human

The evaluation set consists of 19,000 internal, proprietary images similar in character to those in the training datasets referenced above.

Methodology and KPI

True positives, false positives, and false negatives are computed using an intersection-over-union (IoU) criterion greater than 0.5. The KPIs for the evaluation data are reported in the table below. The model is evaluated on precision, recall, and accuracy.

The intended use of this model is to detect cars, so the key performance indicators (KPIs) are calculated for the car class only. The other classes (road signs, two-wheelers, and persons) are not factored into the model evaluation.

Model           Precision   Recall   Accuracy
TrafficCamNet   92.65       89.95    83.9
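For reference, a minimal sketch of the IoU criterion used above, with boxes given as (x1, y1, x2, y2) corner pairs:

    def iou(a, b):
        """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    # A prediction is a true positive when it matches an unmatched ground-truth
    # box of the same class with IoU > 0.5; unmatched predictions count as false
    # positives and unmatched ground truths as false negatives.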

Real-time Inference Performance

Inference is run on the provided pruned model at INT8 precision (FP16 on Jetson Nano). Inference performance is measured using trtexec on Jetson Nano, AGX Xavier, Xavier NX, and NVIDIA T4 GPUs. The Jetson devices run in Max-N configuration for maximum GPU frequency. The numbers shown are inference-only performance; end-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.


How to use this model

These models must be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.

The primary use case intended for these models is detecting cars in a color (RGB) image. The model can be used to detect cars from photos and videos by using appropriate video or image decoding and pre-processing. As a secondary use case, the model can also detect persons, road signs, and two-wheelers from images or videos. However, these additional classes are not the main intended use for these models.

There are two flavors of these models:

  • unpruned
  • pruned

The unpruned model is intended for training with the TAO Toolkit and the user's own dataset. This can produce high-fidelity models adapted to the use case. The Jupyter notebook available as part of the TAO container can be used to re-train.

The pruned model is intended for efficient deployment on the edge using DeepStream SDK or TensorRT. This model accepts 960x544x3 input tensors and outputs a 60x34x16 bbox-coordinate tensor and a 60x34x4 class-confidence tensor. DeepStream provides a toolkit to create efficient video analytics pipelines that capture, decode, and pre-process the data before running inference. DeepStream then post-processes the output bbox-coordinate and class-confidence tensors with the NMS or DBSCAN clustering algorithm to create the final bounding boxes. The sample application and config file to run this model are provided in the DeepStream SDK.
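To illustrate the clustering step, here is a simplified stand-in using scikit-learn's DBSCAN (an assumption made for this sketch; DeepStream ships its own implementation). The eps value and the center-based grouping are illustrative choices, not DeepStream defaults:

    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_boxes(boxes, eps=0.05, min_samples=1):
        """Group raw per-cell boxes (an (N, 4) array of (x1, y1, x2, y2)) whose
        centers are close, then average each cluster into one final box."""
        if len(boxes) == 0:
            return np.empty((0, 4))
        centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                            (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
        centers /= np.array([960.0, 544.0])   # normalize by input size so eps is unitless
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit(centers).labels_
        merged = [boxes[labels == k].mean(axis=0) for k in set(labels.tolist()) if k != -1]
        return np.stack(merged) if merged else np.empty((0, 4))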

The unpruned and pruned models are encrypted and will only operate with the following key:

  • Model load key: tlt_encode

Please make sure to use this as the key for all TAO commands that require a model load key.

Instructions to use unpruned model with TAO

To use this model as pretrained weights for transfer learning, use the snippet below as a template for the model_config component of the experiment spec file when training a DetectNet_v2 model. For more information on the experiment spec file, refer to the TAO Toolkit User Guide.

model_config {
  num_layers: 18                                   # depth of the ResNet feature extractor
  pretrained_model_file: "/path/to/the/model.tlt"  # path to the downloaded TrafficCamNet weights
  use_batch_norm: true
  objective_set {
    bbox {                                         # bounding-box regression head
      scale: 35.0
      offset: 0.5
    }
    cov {                                          # coverage (confidence) head
    }
  }
  training_precision {
    backend_floatx: FLOAT32
  }
  arch: "resnet"
  all_projections: true
}

Instructions to deploy this model with DeepStream

To create the entire end-to-end video analytics application, deploy this model with DeepStream SDK. DeepStream SDK is a streaming analytics toolkit to accelerate deployment of AI-based video analytics applications. The pruned model included here can be integrated directly into DeepStream by following the instructions below.

  1. Run the default deepstream-app included in the DeepStream docker by executing the commands below.

      ## Download Model:
    
      mkdir -p $HOME/trafficcamnet && \
      wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/trafficcamnet/versions/pruned_v1.0/files/resnet18_trafficcamnet_pruned.etlt \
      -O $HOME/trafficcamnet/resnet18_trafficcamnet_pruned.etlt && \
      wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/trafficcamnet/versions/pruned_v1.0/files/trafficnet_int8.txt \
      -O $HOME/trafficcamnet/trafficnet_int8.txt
    
      ## Run Application
    
      xhost +
      sudo docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v $HOME:/opt/nvidia/deepstream/deepstream-5.1/samples/models/tlt_pretrained_models \
      -w /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models nvcr.io/nvidia/deepstream:5.1-21.02-samples \
      deepstream-app -c deepstream_app_source1_trafficcamnet.txt
    
  2. Install deepstream on your local host and run the deepstream-app.

    To deploy this model with DeepStream, please follow the instructions below:

    Download and install DeepStream SDK. The installation instructions for DeepStream are provided in DeepStream development guide. The config files for the purpose-built models are located in:

      /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models
    

    /opt/nvidia/deepstream is the default DeepStream installation directory. This path will be different if you are installing in a different directory.

    You will need 2 config files and 1 label file. These files are provided in the tlt_pretrained_models directory.

    deepstream_app_source1_trafficcamnet.txt - Main config file for DeepStream app
    config_infer_primary_trafficcamnet.txt - File to configure inference settings
    labels_trafficnet.txt - Label file with the 4 detected classes
    

    Key Parameters in config_infer_primary_trafficcamnet.txt

    tlt-model-key
    tlt-encoded-model
    labelfile-path
    int8-calib-file
    input-dims
    num-detected-classes
    

    Run deepstream-app:

    deepstream-app -c deepstream_app_source1_trafficcamnet.txt
    

    Documentation to deploy with DeepStream is provided in "Deploying to DeepStream" chapter of TAO User Guide.

Technical blogs

  • Access the latest in Vision AI development workflows with NVIDIA TAO Toolkit 5.0
  • Improve accuracy and robustness of vision AI models with vision transformers and NVIDIA TAO
  • Train like a ‘pro’ without being an AI expert using TAO AutoML
  • Create Custom AI models using NVIDIA TAO Toolkit with Azure Machine Learning
  • Developing and Deploying AI-powered Robots with NVIDIA Isaac Sim and NVIDIA TAO
  • Learn endless ways to adapt and supercharge your AI workflows with TAO - Whitepaper
  • Customize Action Recognition with TAO and deploy with DeepStream
  • Read the 2 part blog on training and optimizing 2D body pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train real-time License plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important; learn how to achieve state-of-the-art accuracy for classification and object detection models using TAO

Suggested reading

  • More information about the TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
  • TAO documentation
  • Read the TAO Getting Started guide and release notes.
  • If you have any questions or feedback, please refer to the discussions on TAO Toolkit Developer Forums
  • Deploy your models for video analytics application using DeepStream. Learn more about DeepStream SDK
  • Deploy your models in Riva for ConvAI use cases.

Ethical Considerations

The training and evaluation datasets consist mostly of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.