Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for customizing purpose-built pre-trained AI models with your own data. TAO Toolkit adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.
The pre-trained models accelerate the AI training process and reduce the costs associated with large-scale data collection, labeling, and training models from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.
Build end-to-end services and solutions for transforming pixels and sensor data into actionable insights using TAO, the DeepStream SDK, and TensorRT. These models are suitable for object detection, classification, and segmentation.
Object detection is a popular computer vision technique that can detect one or multiple objects in a frame. Object detection recognizes the individual objects in an image and places a bounding box around each one. This model card contains pre-trained weights for the DINO object detection networks trained on the COCO dataset to facilitate transfer learning through the Train Adapt Optimize (TAO) Toolkit.
The models in this instance are object detectors that take RGB images as input and produce bounding boxes and classes as output. There are four different feature extractors for DINO: ResNet50, GCViT-Tiny, FAN-Small, and FAN-Large. The backbone networks are classification networks pre-trained on the ImageNet-1K dataset, except for FAN-Large, which was pre-trained on the ImageNet-22K dataset with an input resolution of 384 x 384 for higher accuracy.
These models were trained using the DINO entrypoint in TAO. The training algorithm optimizes the network to minimize the localization and confidence loss for the objects.
DINO was trained on the COCO 2017 dataset, which contains 118K training images and 5K validation images with corresponding annotation files. The annotations contain bounding boxes for 80 object categories.
We have tested the DINO model on the COCO 2017 validation dataset.
The key performance indicator is the mean average precision (mAP), following the standard evaluation protocol for object detection. The KPIs for the evaluation data are reported below.
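For reference, the standard COCO evaluation protocol can be reproduced with pycocotools. The sketch below is a minimal example, assuming the detections have already been written in the standard COCO results JSON format; the file names are placeholders.

```python
# Minimal sketch of COCO-style mAP evaluation with pycocotools.
# File names are placeholders; detections are assumed to be in the
# standard COCO results JSON format (image_id, category_id, bbox, score).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")   # ground truth
coco_dt = coco_gt.loadRes("dino_detections.json")      # model detections

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()   # prints AP, AP50, AP75, APs, APm, APl
```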
model | precision | mAP | mAP50 | mAP75 | mAPs | mAPm | mAPl |
---|---|---|---|---|---|---|---|
dino_resnet_50 | FP32 | 48.8 | 66.9 | 53.4 | 31.8 | 51.8 | 63.4 |
dino_gcvit_tiny | FP32 | 50.7 | 68.9 | 55.3 | 33.2 | 54.1 | 65.8 |
dino_fan_small | FP32 | 53.1 | 71.2 | 57.8 | 35.2 | 56.4 | 68.9 |
dino_fan_large | FP16 | 56.9 | 76.1 | 62.3 | 40.5 | 60.5 | 73.7 |
Inference is run on the provided models at FP16 precision. Inference performance is measured using trtexec on the Jetson devices and NVIDIA discrete GPUs listed in the tables below. The Jetson devices run in the Max-N configuration for maximum GPU frequency. The performance shown here is inference-only; end-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.
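As a rough sketch of how such a measurement can be reproduced, the snippet below invokes trtexec on an exported ONNX model at FP16. The ONNX file name and the input tensor name ("inputs") are assumptions and should be matched to the model actually exported from TAO.

```python
# Sketch: build an FP16 TensorRT engine from an exported DINO ONNX model
# and report throughput with trtexec. File name and input tensor name are
# assumptions; inspect your exported model and adjust.
import subprocess

cmd = [
    "trtexec",
    "--onnx=dino_resnet_50.onnx",
    "--fp16",
    "--shapes=inputs:1x3x544x960",   # batch 1, C x H x W as documented below
    "--saveEngine=dino_resnet_50_fp16.engine",
]
subprocess.run(cmd, check=True)  # trtexec prints latency/throughput statistics
```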
DINO + RN50
Platform | Batch Size | FPS |
---|---|---|
Jetson Orin Nano | 1 | 5.7 |
Orin NX 16GB | 1 | 8.4 |
AGX Orin 64GB | 4 | 22 |
A2 | 1 | 22.5 |
T4 | 4 | 38.9 |
A30 | 8 | 115 |
L4 | 1 | 79.6 |
L40 | 1 | 215 |
A100 | 32 | 244 |
H100 | 32 | 442 |
DINO + FAN-S
Platform | Batch Size | FPS |
---|---|---|
Jetson Orin Nano | 1 | 3.1 |
Orin NX 16GB | 1 | 4.4 |
AGX Orin 64GB | 4 | 11.2 |
A2 | 1 | 11.7 |
T4 | 4 | 20 |
A30 | 8 | 56 |
L4 | 1 | 44 |
L40 | 1 | 119.5 |
A100 | 32 | 121 |
H100 | 32 | 213 |
DINO + GCViT-T
Platform | Batch Size | FPS |
---|---|---|
Jetson Orin Nano | 1 | 3.3 |
Orin NX 16GB | 1 | 4.9 |
AGX Orin 64GB | 4 | 13 |
A2 | 1 | 15.7 |
T4 | 8 | 26.7 |
A30 | 8 | 77 |
L4 | 1 | 56.6 |
L40 | 1 | 151 |
A100 | 32 | 165 |
H100 | 32 | 290 |
DINO + FAN-L
Platform | Batch Size | FPS |
---|---|---|
Jetson Orin Nano | 1 | 1.8 |
Orin NX 16GB | 1 | 2.6 |
AGX Orin 64GB | 1 | 6.2 |
A2 | 1 | 6.7 |
T4 | 4 | 10.9 |
A30 | 8 | 33.4 |
L4 | 1 | 26.4 |
L40 | 1 | 68.5 |
A100 | 16 | 70.6 |
H100 | 32 | 125.5 |
These models must be used with NVIDIA hardware and software. On the hardware side, they can run on any NVIDIA GPU, including NVIDIA Jetson devices. On the software side, they can only be used with the Train Adapt Optimize (TAO) Toolkit, the DeepStream SDK, or TensorRT.
The primary use case for these models is detecting objects in a color (RGB) image. The models can be used to detect objects in photos and videos with appropriate video or image decoding and pre-processing.
These models are intended for training and fine-tuning with the TAO Toolkit and the user's own object detection datasets. High-fidelity models can be trained for new use cases. A Jupyter notebook is available as part of the TAO container and can be used for re-training.
The models are also intended for easy edge deployment using the DeepStream SDK or TensorRT. DeepStream provides the facilities to create efficient video analytics pipelines to capture, decode, and pre-process the data before running inference.
RGB images of dimension B x 3 x 544 x 960 (B C H W)
Category labels (80 COCO classes) and bounding-box coordinates for each detected object in the input image.
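As a minimal pre-processing sketch for building such an input batch, the code below resizes an RGB image to 960 x 544 and arranges it in B x C x H x W order. The ImageNet mean/std normalization is an assumption and should be matched to the normalization configured in your training/export spec.

```python
# Sketch: prepare one RGB image as a B x 3 x 544 x 960 tensor for DINO.
# The ImageNet mean/std normalization is an assumption; match it to your
# TAO experiment spec.
import numpy as np
from PIL import Image

def preprocess(image_path: str) -> np.ndarray:
    img = Image.open(image_path).convert("RGB").resize((960, 544))  # W x H
    x = np.asarray(img, dtype=np.float32) / 255.0                   # H x W x C
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (x - mean) / std
    x = x.transpose(2, 0, 1)        # C x H x W
    return x[np.newaxis, ...]       # B x C x H x W, here B = 1

batch = preprocess("sample.jpg")
print(batch.shape)  # (1, 3, 544, 960)
```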
To use these models as pretrained weights for transfer learning, use the snippet below as a template for the model and train components of the experiment spec file when training a DINO model. For more information on the experiment spec file, please refer to the TAO Toolkit User Guide.
```yaml
train:
  pretrained_model_path: /path/to/the/dino_resnet_50.pth
model:
  backbone: resnet_50
  train_backbone: True
  num_feature_levels: 4
  dec_layers: 6
  enc_layers: 6
  num_queries: 900
  dropout_ratio: 0.0
  dim_feedforward: 2048
```
Documentation on deploying with DeepStream is provided in the "Deploying to DeepStream" chapter of the TAO User Guide.
DINO was trained on the COCO dataset with 80 object categories. The models may therefore not perform well on different data distributions, and we recommend further fine-tuning on the target domain to achieve higher mAP.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License (CC BY-NC-SA 4.0). To view a copy of this license, please visit https://creativecommons.org/licenses/by-nc-sa/4.0/, or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developers to ensure that it meets the requirements for the relevant industry and use case, that the necessary instructions and documentation are provided to understand error rates, confidence intervals, and results, and that the model is being used under the conditions and in the manner intended.