Pre-trained Deformable DETR ImageNet weights

Description: Pre-trained deformable_detr weights trained on ImageNet to facilitate transfer learning using TAO Toolkit.
Publisher: -
Latest Version: gcvit_base_imagenet1k
Modified: October 16, 2023
Size: 350.51 MB

TAO Pretrained Non-commercial Backbone for Deformable DETR

What is Train Adapt Optimize (TAO) Toolkit?

Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for taking purpose-built pre-trained AI models and customizing them with your own data. TAO adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.

Pre-trained models accelerate the AI training process and reduce the costs associated with large-scale data collection, labeling, and training models from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.

Build end-to-end services and solutions for transforming pixels and sensor data into actionable insights using TAO, the DeepStream SDK, and TensorRT. These models are suitable for object detection, classification, and segmentation.

Deformable-DETR Based Object Detection

Object detection is a popular computer vision technique that can detect one or multiple objects in a frame. Object detection recognizes the individual objects in an image and places bounding boxes around them. This model card contains pretrained weights that may be used as a starting point with the Deformable-DETR object detection networks in Train Adapt Optimize (TAO) Toolkit to facilitate transfer learning.

These weights are trained on ImageNet-1K. The following backbones are supported with Deformable-DETR networks.

Supported backbones:

  • resnet_50
  • gc_vit_xxtiny / gc_vit_xtiny / gc_vit_tiny / gc_vit_small / gc_vit_base / gc_vit_large / gc_vit_large_384

Model Versions

  • gcvit_xxtiny_imagenet1k - ImageNet-1K pre-trained GCViT-xxTiny model for fine-tuning.
  • gcvit_xtiny_imagenet1k - ImageNet-1K pre-trained GCViT-xTiny model for fine-tuning.
  • gcvit_tiny_imagenet1k - ImageNet-1K pre-trained GCViT-Tiny model for fine-tuning.
  • gcvit_small_imagenet1k - ImageNet-1K pre-trained GCViT-Small model for fine-tuning.
  • gcvit_base_imagenet1k - ImageNet-1K pre-trained GCViT-Base model for fine-tuning.
  • gcvit_large_imagenet1k - ImageNet-1K pre-trained GCViT-Large model for fine-tuning.
  • gcvit_large_imagenet22k_384 - ImageNet-22K pre-trained GCViT-Large model (384 input resolution) for fine-tuning.

Instructions to Use Pretrained Backbone Models with TAO

To use these models as pretrained backbone weights for transfer learning, use the snippet below as a template for the model and train components of the experiment spec file used to train a Deformable DETR model. For more information on the experiment spec file, please refer to the TAO Toolkit User Guide.

model:
  pretrained_backbone_path: /path/to/the/resnet50.pth  # path to the downloaded pretrained backbone weights
  backbone: resnet_50          # backbone architecture; must match the pretrained weights
  train_backbone: True         # update backbone weights during training instead of freezing them
  num_feature_levels: 4        # number of multi-scale feature levels used by deformable attention
  dec_layers: 6                # number of transformer decoder layers
  enc_layers: 6                # number of transformer encoder layers
  num_queries: 300             # number of object queries (maximum detections per image)
  with_box_refine: True        # enable iterative bounding-box refinement in the decoder
  dropout_ratio: 0.3           # dropout applied in the transformer layers
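
To start from one of the GCViT versions listed above instead of ResNet-50, point pretrained_backbone_path at the downloaded GCViT checkpoint and set backbone to the matching variant. The sketch below is a template only: the checkpoint path /path/to/gcvit_tiny_imagenet1k.pth is an assumed placeholder for wherever you saved the gcvit_tiny_imagenet1k version, and the remaining hyperparameters are carried over unchanged from the snippet above.

model:
  pretrained_backbone_path: /path/to/gcvit_tiny_imagenet1k.pth  # assumed placeholder; use the actual path of the downloaded checkpoint
  backbone: gc_vit_tiny        # must match the GCViT variant of the checkpoint
  train_backbone: True
  num_feature_levels: 4
  dec_layers: 6
  enc_layers: 6
  num_queries: 300
  with_box_refine: True
  dropout_ratio: 0.3

The completed spec file is then passed to the Deformable DETR training entry point as described in the TAO Toolkit User Guide.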

Other TAO Pre-trained Models

  • Get TAO Object Detection pre-trained models for YOLOV4, YOLOV3, FasterRCNN, SSD, DSSD, and RetinaNet architectures from NGC model registry

  • Get TAO DetectNet_v2 Object Detection pre-trained models for DetectNet_v2 architecture from NGC model registry

  • Get TAO EfficientDet Object Detection pre-trained models for EfficientDet architecture from NGC model registry

  • Get TAO Instance segmentation pre-trained models for MaskRCNN architecture from NGC

  • Get TAO Semantic segmentation pre-trained models for UNet architecture from NGC

  • Get Purpose-built models from NGC model registry:

    • PeopleNet
    • TrafficCamNet
    • DashCamNet
    • FaceDetectIR
    • VehicleMakeNet
    • VehicleTypeNet
    • PeopleSegNet
    • PeopleSemSegNet
    • License Plate Detection
    • License Plate Recognition
    • Gaze Estimation
    • Facial Landmark
    • Heart Rate Estimation
    • Gesture Recognition
    • Emotion Recognition
    • FaceDetect
    • 2D Body Pose Net
    • ActionRecognitionNet

License

This work is licensed under the Creative Commons Attribution NonCommercial ShareAlike 4.0 License (CC-BY-NC-SA-4.0). To view a copy of this license, please visit this link, or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Technical blogs

  • Access the latest in Vision AI model development workflows with NVIDIA TAO Toolkit 5.0
  • Improve accuracy and robustness of vision AI apps with Vision Transformers and NVIDIA TAO
  • Train like a ‘pro’ without being an AI expert using TAO AutoML
  • Create Custom AI models using NVIDIA TAO Toolkit with Azure Machine Learning
  • Developing and Deploying AI-powered Robots with NVIDIA Isaac Sim and NVIDIA TAO
  • Learn endless ways to adapt and supercharge your AI workflows with TAO - Whitepaper
  • Customize Action Recognition with TAO and deploy with DeepStream
  • Read the 2 part blog on training and optimizing 2D body pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train a real-time license plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important; learn how you can achieve state-of-the-art accuracy for classification and object detection models using TAO

Suggested reading

  • More information about TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
  • Read the TAO Getting Started guide and release notes.
  • If you have any questions or feedback, please refer to the discussions on TAO Toolkit Developer Forums
  • Deploy your model on the edge using DeepStream. Learn more about DeepStream SDK

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.