Pre-trained DINO NvImageNet weights

Description: Pre-trained DINO weights trained on NvImageNet to facilitate transfer learning using TAO Toolkit.
Publisher: -
Latest Version: resnet50
Modified: October 16, 2023
Size: 292.89 MB

TAO Pretrained Commercial Backbone for DINO

What is Train Adapt Optimize (TAO) Toolkit?

Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for taking purpose-built pre-trained AI models and customizing them with your own data. TAO adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.

Pre-trained models accelerate the AI training process and reduce the costs associated with large-scale data collection, labeling, and training models from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.

Build end-to-end services and solutions for transforming pixels and sensor data into actionable insights using TAO, the DeepStream SDK, and TensorRT. These models are suitable for object detection, classification, and segmentation.

DINO Based Object Detection

Object detection is a popular computer vision technique for detecting one or more objects in a frame. It recognizes the individual objects in an image and places bounding boxes around them. This model card contains pretrained weights that can be used as a starting point with the DINO object detection network in Train Adapt Optimize (TAO) Toolkit to facilitate transfer learning.

These weights are trained on NVImageNet, which is permitted for commercial use. The following backbones are supported with DINO networks.

Supported backbones:

  • resnet_50
  • gc_vit_xxtiny / gc_vit_xtiny / gc_vit_tiny / gc_vit_small / gc_vit_base / gc_vit_large / gc_vit_large_384
  • fan_tiny / fan_small / fan_base / fan_large

Model Versions

  • resnet50 - NVImageNet pre-trained ResNet-50 model for fine-tuning.
  • gcvit_xxtiny_nvimagenet - NVImageNet pre-trained GCViT-xxTiny model for fine-tuning.
  • gcvit_xtiny_nvimagenet - NVImageNet pre-trained GCViT-xTiny model for fine-tuning.
  • gcvit_tiny_nvimagenet - NVImageNet pre-trained GCViT-Tiny model for fine-tuning.
  • gcvit_small_nvimagenet - NVImageNet pre-trained GCViT-Small model for fine-tuning.
  • gcvit_base_nvimagenet - NVImageNet pre-trained GCViT-Base model for fine-tuning.
  • fan_hybrid_tiny_nvimagenet - NVImageNet pre-trained FAN-Hybrid-Tiny model for fine-tuning. (224 resolution)
  • fan_small_hybrid_nvimagenet - NVImageNet pre-trained FAN-Hybrid-Small model for fine-tuning. (224 resolution)
  • fan_base_hybrid_nvimagenet - NVImageNet pre-trained FAN-Hybrid-Base model for fine-tuning.
  • fan_large_hybrid_nvimagenet - NVImageNet pre-trained FAN-Hybrid-Large model for fine-tuning. (224 resolution)
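
Note that the version names above differ from the backbone identifiers used in the experiment spec: for example, the gcvit_tiny_nvimagenet checkpoint pairs with the gc_vit_tiny backbone. A minimal sketch of that pairing, assuming the downloaded file keeps the version name (the actual filename depends on your download):

model:
  # Hypothetical path; point this at the checkpoint you downloaded
  pretrained_backbone_path: /path/to/gcvit_tiny_nvimagenet.pth
  # Backbone identifier from the supported list above
  backbone: gc_vit_tiny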

Instructions to Use Pretrained Backbone Models with TAO

To use these models as pretrained backbone weights for transfer learning, use the snippet below as a template for the model and train components of the experiment spec file used to train a DINO model. For more information on the experiment spec file, refer to the TAO Toolkit User Guide.

model:
  # Path to the pretrained backbone checkpoint downloaded from NGC
  pretrained_backbone_path: /path/to/the/resnet50.pth
  # Backbone identifier; must match the checkpoint (see supported backbones)
  backbone: resnet_50
  # Update backbone weights during training rather than freezing them
  train_backbone: True
  # Number of multi-scale feature levels fed to the transformer
  num_feature_levels: 4
  dec_layers: 6          # transformer decoder layers
  enc_layers: 6          # transformer encoder layers
  num_queries: 900       # object queries per image
  dropout_ratio: 0.0
  dim_feedforward: 2048  # hidden size of transformer feed-forward blocks
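
The snippet above fills in only the model component. Below is a minimal sketch of the accompanying train component; the field names follow the TAO Toolkit DINO documentation, and the values are illustrative assumptions to check against the User Guide for your TAO version.

train:
  num_gpus: 1
  num_epochs: 12           # illustrative; tune for your dataset
  validation_interval: 1   # evaluate once per epoch
  optim:
    lr: 0.0002             # learning rate for the detection head
    lr_backbone: 0.00002   # lower rate for the pretrained backbone
    momentum: 0.9

With both components in place, training is typically launched through the TAO launcher, for example tao model dino train -e /path/to/experiment_spec.yaml (TAO 5.x syntax; confirm the exact command against the User Guide for your release).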

License

The license to use this model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.