ImageNet NV-DINOV2 Model Card
Image classification is a popular computer vision technique in which an image is assigned to one of a set of designated classes based on its visual features.
NV-DINOv2 is a visual foundation model trained on an NVIDIA proprietary large-scale dataset. DINOv2 is a self-supervised learning method that combines two SSL techniques: DINO and iBOT. Such models can greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without fine-tuning. Trained on large curated datasets, our model has learned robust fine-grained representations useful for localization and classification tasks. It can serve as a foundation model for a variety of downstream tasks with few labeled examples. For more details on the method, please refer to DINOv2.
The model in this model card uses the NV-DINOv2 pre-trained ViT-L backbone and fine-tunes a linear classification head (linear probe) on the ImageNet-1K dataset.
NV-DINOv2 was pre-trained on NVIDIA proprietary data collected under a commercial license. The model with a linear probe head was then fine-tuned on the ImageNet-1K dataset.
The evaluation data is the ImageNet-1K dataset.
Methodology and KPI
Performance is measured as top-1 accuracy on ImageNet-1K.
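Top-1 accuracy counts a prediction as correct only when the model's highest-scoring class matches the ground-truth label. A minimal sketch of the metric (function and variable names are illustrative, not part of this model card):

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose arg-max class equals the ground-truth label.

    logits: (N, num_classes) array of per-class scores.
    labels: (N,) array of integer class indices.
    """
    predictions = logits.argmax(axis=1)
    return float((predictions == labels).mean())

# Toy example: 3 samples, 4 classes; two predictions are correct.
scores = np.array([[0.1, 0.7, 0.1, 0.1],
                   [0.6, 0.2, 0.1, 0.1],
                   [0.2, 0.2, 0.5, 0.1]])
truth = np.array([1, 0, 3])
print(top1_accuracy(scores, truth))  # 2 of 3 correct
```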
NV-DINOv2 ImageNet Model
Inference is run on the provided unpruned model at FP16 precision using trtexec. End-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.
NVDinoV2 (224×224 resolution) inference performance was measured on the following platforms: Orin NX 16GB and AGX Orin 64GB.
These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the DeepStream SDK.
The model can be used to classify objects in photos and videos by applying appropriate video or image decoding and pre-processing. For each input image, it outputs confidence scores over the 1,000 ImageNet classes.
The models are intended for easy edge deployment using DeepStream SDK. DeepStream provides the facilities to create efficient video analytic pipelines to capture, decode, and pre-process the data before running inference.
RGB image of dimensions: 224 × 224 × 3 (W × H × C)
Channel Ordering of the Input: NCHW, where N = Batch Size, C = number of channels (3), H = Height of images (224), W = Width of the images (224)
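The NCHW layout above can be produced from a decoded HWC image as in the following numpy-only sketch. Note the normalization constants are the commonly used ImageNet mean/std, an assumption for illustration rather than a value specified by this model card:

```python
import numpy as np

# Standard ImageNet normalization constants -- an assumption here,
# not specified by this model card.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc: np.ndarray) -> np.ndarray:
    """Convert a decoded 224x224x3 uint8 RGB image to a 1x3x224x224 float tensor."""
    assert image_hwc.shape == (224, 224, 3), "expects an already-resized RGB image"
    x = image_hwc.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD     # per-channel normalization
    x = x.transpose(2, 0, 1)                   # HWC -> CHW
    return x[np.newaxis, ...]                  # add batch dimension: NCHW

# Example with a dummy image:
dummy = np.zeros((224, 224, 3), dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape)  # (1, 3, 224, 224)
```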
The output is a 1000 × 1 vector where each value is the confidence score of the respective class.
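Mapping that 1000-dimensional output back to a class label is an arg-max, with a softmax applied first if the scores are raw logits. A minimal sketch (the class names are illustrative):

```python
import numpy as np

def decode_output(scores: np.ndarray, class_names: list) -> tuple:
    """Return the top-1 class name and its softmax confidence.

    scores: (num_classes,) vector of per-class scores from the model.
    """
    # Numerically stable softmax, in case the scores are raw logits.
    exp = np.exp(scores - scores.max())
    probs = exp / exp.sum()
    idx = int(probs.argmax())
    return class_names[idx], float(probs[idx])

# Toy 4-class example with illustrative names:
names = ["cat", "dog", "car", "tree"]
label, conf = decode_output(np.array([0.2, 2.5, 0.1, 0.3]), names)
print(label)  # dog
```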
Instructions to deploy these models with DeepStream
To create an end-to-end video analytics application, deploy this model with the DeepStream SDK. The DeepStream SDK is a streaming analytics toolkit that accelerates deployment of AI-based video analytics applications. The model can be integrated directly into DeepStream by following the instructions below.
To deploy these models with DeepStream 6.1, please follow the instructions below:
/opt/nvidia/deepstream is the default DeepStream installation directory. This path will differ if DeepStream is installed elsewhere.
Two extra files are required, which are provided in the NVIDIA-AI-IOT repository.
A label file: contains the names of the classes that the model is trained to classify. The order in which the classes are listed must match the order in which the model predicts the output. Here is a sample file for the ImageNet classification model:
A DeepStream configuration file: specifies the key inference parameters for the model.
```shell
ds-tao-classifier -c configs/multi_task_tao/pgie_multi_task_tao_config.txt -i file:///path/to/img.jpg
```
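Because the class order in the label file must match the model's output order, index i of the output vector corresponds to entry i of the file. A small sketch of loading such a file; the one-name-per-line format and the sample contents are assumptions for illustration:

```python
import os
import tempfile

def load_labels(path: str) -> list:
    """Load class names, one per line, in the same order as the model output.

    The one-name-per-line format is an assumption for illustration.
    """
    with open(path, "r", encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Example: write and read back a tiny label file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("tench\ngoldfish\ngreat white shark\n")
    tmp_path = tmp.name

labels = load_labels(tmp_path)
os.remove(tmp_path)
print(labels[0], len(labels))  # tench 3
```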
Documentation to deploy with DeepStream is provided in "Deploying to DeepStream" chapter of TAO User Guide.
This model was fine-tuned on the ImageNet-1K dataset with 1,000 object categories. It may therefore not perform well on different data distributions, so we recommend further fine-tuning on the target domain to achieve higher accuracy.
- Get TAO Container
- Get other Purpose-built models from NGC model registry:
- Read the two-part blog on training and optimizing a 2D body pose estimation model with TAO - Part 1 | Part 2
- Learn how to train a real-time license plate detection and recognition app with TAO and DeepStream.
- Model accuracy is extremely important; learn how you can achieve state-of-the-art accuracy for classification and object detection models using TAO
- Learn how to train Instance segmentation model using MaskRCNN with TAO
- Read the technical tutorial on how PeopleNet model can be trained with custom data using Transfer Learning Toolkit
- Learn how to train and deploy real-time intelligent video analytics apps and services using DeepStream SDK
- More information about the TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
- Read the TAO Quick Start guide and release notes.
- If you have any questions or feedback, please refer to the discussions on TAO Toolkit Developer Forums
- Deploy your model on the edge using DeepStream. Learn more about DeepStream SDK
This work is licensed under the Creative Commons Attribution NonCommercial ShareAlike 4.0 License (CC-BY-NC-SA-4.0). To view a copy of this license, please visit this link, or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developers to ensure that it meets the requirements for the relevant industry and use case, that the necessary instructions and documentation are provided to understand error rates, confidence intervals, and results, and that the model is being used under the conditions and in the manner intended.