TrafficCamNet detects cars, persons, road signs and two-wheelers in an image. This model is ready for commercial use.
Architecture Type: Convolutional Neural Network (CNN)
Network Architecture: DetectNet_v2 + ResNet18 (Feature Extractor)
The model is based on the NVIDIA DetectNet_v2 detector with ResNet18 as the feature extractor. This architecture, also known as GridBox object detection, uses bounding-box regression on a uniform grid over the input image. The GridBox system divides the input image into a grid in which each cell predicts four normalized bounding-box parameters (xc, yc, w, h) and a confidence value per output class. The raw normalized bounding-box and confidence detections need to be post-processed by a clustering algorithm such as DBSCAN or NMS to produce the final bounding-box coordinates and category labels.
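For illustration, here is a minimal Python sketch of how the per-cell (xc, yc, w, h) predictions could be turned into image-space candidate boxes before clustering. The grid stride (16 = 960/60 = 544/34) follows from the input and output shapes quoted on this page, and the scale/offset values match the `bbox` settings in the model_config snippet further down; exactly how the predictions are normalized relative to the cell center is an assumption for illustration, not the exact TAO/DeepStream parser.

```python
import numpy as np

def decode_gridbox(bbox_pred, cov_pred, stride=16.0, scale=35.0, offset=0.5,
                   conf_threshold=0.2):
    """Decode one class's grid outputs into candidate image-space boxes.

    bbox_pred: (4, H, W) normalized (xc, yc, w, h) per grid cell
               (normalization convention assumed, see the note above).
    cov_pred:  (H, W) coverage/confidence map for the same class.
    """
    grid_h, grid_w = cov_pred.shape
    # Cell centers in input-image pixels (offset shifts to the cell middle).
    cx = (np.arange(grid_w) + offset) * stride
    cy = (np.arange(grid_h) + offset) * stride
    cxg, cyg = np.meshgrid(cx, cy)              # both (H, W)
    # Un-normalize: predictions assumed relative to cell center, scaled by `scale`.
    xc = cxg + bbox_pred[0] * scale
    yc = cyg + bbox_pred[1] * scale
    w = bbox_pred[2] * scale
    h = bbox_pred[3] * scale
    x1, y1 = xc - w / 2, yc - h / 2
    x2, y2 = xc + w / 2, yc + h / 2
    keep = cov_pred > conf_threshold
    boxes = np.stack([x1[keep], y1[keep], x2[keep], y2[keep]], axis=1)
    return boxes, cov_pred[keep]                # still to be clustered (NMS/DBSCAN)
```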
Input Type(s): Images
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: 4D
Other Properties Related to Input:
- Resolution: 960 x 544 x 3 (W x H x C)
- Channel ordering of the input: NCHW, where N = batch size, C = number of channels (3), H = height of the images (544), W = width of the images (960)
- Input scale: 1/255.0
- Mean subtraction: None
- No minimum bit depth, alpha, or gamma
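A minimal Python preprocessing sketch matching the properties above. The use of OpenCV and the plain-resize policy are assumptions for illustration; any decode path that produces this tensor layout is equivalent.

```python
import cv2
import numpy as np

def preprocess(image_path: str, width: int = 960, height: int = 544) -> np.ndarray:
    """Produce the 4D NCHW float tensor described above from an image file."""
    bgr = cv2.imread(image_path)                # HWC, BGR, uint8
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # the model expects RGB
    rgb = cv2.resize(rgb, (width, height))      # W x H = 960 x 544
    x = rgb.astype(np.float32) / 255.0          # input scale 1/255.0, no mean subtraction
    x = np.transpose(x, (2, 0, 1))              # HWC -> CHW
    return x[np.newaxis, ...]                   # add batch dim -> (1, 3, 544, 960)
```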
Output Type(s): Label(s), Bounding-Box(es), Confidence Scores
Output Format: Label: Text String(s); Bounding Box: (x-coordinate, y-coordinate, width, height), Confidence Scores: Floating Point
Other Properties Related to Output: Category Label(s): persons, road signs, two-wheelers, and vehicles; Bounding Box Coordinate(s); Confidence Score(s)
Runtime Engine(s): TAO Toolkit, DeepStream SDK, TensorRT
Supported Hardware Architecture Compatibility: NVIDIA GPUs, including NVIDIA Jetson devices
Preferred Operating System(s): Linux
The majority of the training dataset was collected and labeled in-house from images captured by a variety of dashcams; the remaining images were taken from traffic cameras in a city in the US.
Two proprietary, internal datasets labeled in-house were used.
Object distribution across the two datasets:

| Environment | Images | Cars | Persons | Road Signs | Two-Wheelers |
|---|---|---|---|---|---|
| Dashcam (5 ft height) | 40,000 | 1.7M | 720,000 | 354,127 | 54,000 |
| Traffic-signal content | 160,000 | 1.1M | 53,500 | 184,000 | 11,000 |
| Total | 200,000 | 2.8M | 773,500 | 538,127 | 65,000 |
Training Data Ground-truth Labeling Guidelines
The evaluation dataset consists of 19,000 internal proprietary images identical in character to those in the training datasets referenced above.
True positives, false positives, and false negatives are calculated using an intersection-over-union (IoU) criterion greater than 0.5. The KPIs for the evaluation data are reported in the table below. The model is evaluated on precision, recall, and accuracy.
The intended use of this model is to detect cars; with that in mind, the key performance indicators (KPIs) are calculated for the car class only. The other classes (road signs, two-wheelers, and persons) are not factored into the model evaluation.
| Model | Precision (%) | Recall (%) | Accuracy (%) |
|---|---|---|---|
| TrafficCamNet | 92.65 | 89.95 | 83.9 |
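For reference, the relationship between these KPIs and the matched-detection counts can be sketched as below. The precision and recall formulas are standard; computing accuracy as TP / (TP + FP + FN) is an assumption (a common convention for detection, where true negatives are undefined), not necessarily the exact formula behind the 83.9 above.

```python
def detection_kpis(tp: int, fp: int, fn: int) -> dict:
    """KPIs from true/false positives and false negatives at IoU > 0.5."""
    precision = tp / (tp + fp)       # fraction of predicted boxes that are correct
    recall = tp / (tp + fn)          # fraction of ground-truth objects found
    accuracy = tp / (tp + fp + fn)   # assumed detection-style accuracy (no TNs)
    return {"precision": 100 * precision,
            "recall": 100 * recall,
            "accuracy": 100 * accuracy}
```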
Inference is run on the provided pruned model at INT8 precision (FP16 on Jetson Nano, which lacks INT8 support). Inference performance is measured with `trtexec` on Jetson Nano, AGX Xavier, Xavier NX, and NVIDIA T4 GPUs. The Jetson devices run in the Max-N configuration for maximum GPU frequency. The performance shown here is inference-only; end-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.
These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. The models can only be used with the Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.
The primary use case intended for these models is detecting cars in a color (RGB) image. The model can be used to detect cars from photos and videos by using appropriate video or image decoding and pre-processing. As a secondary use case, the model can also be used to detect persons, road signs, and two-wheelers from images or videos. However, these additional classes are not the main intended use for these models.
There are two flavors of these models:

- The unpruned model is intended for training with the TAO Toolkit and the user's own dataset. This can provide high-fidelity models that are adapted to the use case. The Jupyter notebook available as part of the TAO container can be used to re-train.
- The pruned model is intended for efficient deployment on the edge using DeepStream SDK or TensorRT. This model accepts 960x544x3 input tensors and outputs a 60x34x16 bbox-coordinate tensor (4 classes x 4 box parameters) and a 60x34x4 class-confidence tensor (one channel per class). DeepStream provides a toolkit to create efficient video analytics pipelines to capture, decode, and pre-process the data before running inference. DeepStream then post-processes the output bbox-coordinate and class-confidence tensors with NMS or DBSCAN clustering to create the final bounding boxes (see the NMS sketch below). The sample application and config file to run this model are provided in the DeepStream SDK.
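As an illustration of that clustering step, here is a minimal greedy NMS sketch over decoded candidate boxes (DeepStream's shipped implementation and its thresholds may differ; this is a reference sketch only):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one [x1, y1, x2, y2] box and an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]   # highest-confidence boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Drop every remaining box that overlaps the kept box too much.
        order = rest[iou(boxes[i], boxes[rest]) < iou_threshold]
    return keep
```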
The unpruned and pruned models are encrypted and will only operate with the following key:
Please make sure to use this as the key for all TAO commands that require a model load key.
In order to use this model as pretrained weights for transfer learning, use the snippet below as a template for the model_config component of the experiment spec file when training a DetectNet_v2 model. For more information on the experiment spec file, refer to the TAO Toolkit User Guide.
```
model_config {
  num_layers: 18
  pretrained_model_file: "/path/to/the/model.tlt"
  use_batch_norm: true
  objective_set {
    bbox {
      scale: 35.0
      offset: 0.5
    }
    cov {
    }
  }
  training_precision {
    backend_floatx: FLOAT32
  }
  arch: "resnet"
  all_projections: true
}
```

Note that `arch: "resnet"` together with `num_layers: 18` selects the ResNet18 feature extractor described above, and the `bbox` scale and offset are the same normalization constants referenced in the grid-decoding sketch earlier on this page.
To create the entire end-to-end video analytics application, deploy this model with the DeepStream SDK. DeepStream SDK is a streaming analytics toolkit to accelerate deployment of AI-based video analytics applications. The pruned model included here can be integrated directly into DeepStream by following the instructions below.
Run the default `deepstream-app` included in the DeepStream docker container by executing the commands below.
```bash
## Download Model:
mkdir -p $HOME/trafficcamnet && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/trafficcamnet/versions/pruned_v1.0/files/resnet18_trafficcamnet_pruned.etlt \
  -O $HOME/trafficcamnet/resnet18_trafficcamnet_pruned.etlt && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/trafficcamnet/versions/pruned_v1.0/files/trafficnet_int8.txt \
  -O $HOME/trafficcamnet/trafficnet_int8.txt

## Run Application
xhost +
sudo docker run --gpus all -it --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
  -v $HOME:/opt/nvidia/deepstream/deepstream-5.1/samples/models/tlt_pretrained_models \
  -w /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models \
  nvcr.io/nvidia/deepstream:5.1-21.02-samples \
  deepstream-app -c deepstream_app_source1_trafficcamnet.txt
```
Alternatively, install DeepStream on your local host and run `deepstream-app` there.
To deploy this model with DeepStream, please follow the instructions below:
Download and install the DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream Development Guide. The config files for the purpose-built models are located in:

`/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models`

`/opt/nvidia/deepstream` is the default DeepStream installation directory; this path will be different if you installed DeepStream in a different directory.
You will need two config files and one label file, all provided in the `tlt_pretrained_models` directory:

- `deepstream_app_source1_trafficcamnet.txt`: main config file for the DeepStream app
- `config_infer_primary_trafficcamnet.txt`: config file for the inference settings
- `labels_trafficnet.txt`: label file with the 4 detected classes
Key parameters in `config_infer_primary_trafficcamnet.txt` (see the example excerpt after this list):

- `tlt-model-key`: the model load key for the encrypted model
- `tlt-encoded-model`: path to the encrypted `.etlt` model file
- `labelfile-path`: path to the label file
- `int8-calib-file`: path to the INT8 calibration cache
- `input-dims`: dimensions of the input tensor
- `num-detected-classes`: number of classes the model detects
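A hypothetical excerpt showing how these keys fit together; the paths and the key placeholder below are illustrative assumptions, so use the values shipped with the DeepStream sample config:

```
[property]
# Key used to decrypt the .etlt model (see the model load key above).
tlt-model-key=<model load key>
# Encrypted TAO model and its INT8 calibration cache (illustrative paths).
tlt-encoded-model=../../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt
int8-calib-file=../../models/tlt_pretrained_models/trafficcamnet/trafficnet_int8.txt
labelfile-path=labels_trafficnet.txt
# C;H;W;input-order (0 = NCHW) -- matches the 3 x 544 x 960 input.
input-dims=3;544;960;0
num-detected-classes=4
```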
Run `deepstream-app`:

```bash
deepstream-app -c deepstream_app_source1_trafficcamnet.txt
```
Documentation to deploy with DeepStream is provided in the "Deploying to DeepStream" chapter of the TAO Toolkit User Guide.
The training and evaluation datasets consist mostly of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.