Pre-trained GCViT ImageNet Classification weights

Description: Pre-trained GCViT weights trained on ImageNet to facilitate transfer learning using TAO Toolkit.
Publisher: -
Latest Version: gcvit_large_imagenet22k_384
Modified: October 16, 2023
Size: 824.03 MB

TAO Non-commercial Pretrained GCViT Classification Model

What is Train Adapt Optimize (TAO) Toolkit?

Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for customizing purpose-built pre-trained AI models with your own data. TAO Toolkit adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.

The pre-trained models accelerate the AI training process and reduce costs associated with large-scale data collection, labeling, and training from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.

Build end-to-end services and solutions for transforming pixels and sensor data into actionable insights using TAO Toolkit, the DeepStream SDK, and TensorRT. These models are suitable for object detection, classification, and segmentation.

Model Overview

Image classification is a popular computer vision technique in which an image is classified into one of the designated classes based on the image features. This model card contains pre-trained weights for the GCViT family of classification models. These weights can be used as a starting point with the classification app in TAO Toolkit to facilitate transfer learning.

Model Architecture

GCViT (Global Context Vision Transformer) is a transformer-based family of backbones from NVIDIA research that achieves state-of-the-art results in ImageNet-1K classification. This family of backbones combines global context self-attention with standard local self-attention to effectively and efficiently model both long- and short-range spatial interactions. GCViT can be used for image classification tasks as well as downstream tasks such as object detection. Use GCViT when you want to achieve state-of-the-art accuracy on your target dataset while using fewer FLOPs than comparable architectures such as Swin Transformer and ConvNeXt. The global-query idea is illustrated in the toy sketch below.
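
As a rough illustration of that idea, the toy PyTorch sketch below derives attention queries from a pooled, image-wide summary while keys and values come from the local tokens. This is not NVIDIA's implementation; the class and variable names are made up for illustration.

import torch
import torch.nn as nn

class ToyGlobalQueryAttention(nn.Module):
    # Toy sketch only: the query comes from a global summary of the whole
    # feature map; keys/values come from the local tokens.
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.h, self.d = num_heads, dim // num_heads
        self.to_q = nn.Linear(dim, dim)       # acts on the global summary
        self.to_kv = nn.Linear(dim, 2 * dim)  # acts on the local tokens
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) sequence of local tokens
        B, N, C = x.shape
        g = x.mean(dim=1, keepdim=True)                    # (B, 1, C) global summary
        q = self.to_q(g).view(B, 1, self.h, self.d).transpose(1, 2)   # (B, h, 1, d)
        k, v = self.to_kv(x).chunk(2, dim=-1)
        k = k.view(B, N, self.h, self.d).transpose(1, 2)   # (B, h, N, d)
        v = v.view(B, N, self.h, self.d).transpose(1, 2)   # (B, h, N, d)
        attn = (q @ k.transpose(-2, -1)) * self.d ** -0.5  # (B, h, 1, N)
        ctx = attn.softmax(dim=-1) @ v                     # (B, h, 1, d)
        return self.proj(ctx.transpose(1, 2).reshape(B, 1, C))

x = torch.randn(2, 196, 64)                  # e.g. a 14 x 14 grid of 64-dim tokens
print(ToyGlobalQueryAttention(64)(x).shape)  # torch.Size([2, 1, 64])

In the actual architecture, global-query attention of this kind is interleaved with standard local window self-attention, and the global query tokens are shared across local windows; see the paper cited in the Reference section for details.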

Training

This model was trained using the classification_pyt entrypoint in TAO. The training algorithm optimizes the network to minimize cross-entropy loss.
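
As an illustration, a typical launcher invocation with recent TAO releases looks like the following. The spec file path is a placeholder, and the exact command syntax can vary between TAO versions, so consult the TAO Toolkit User Guide for your release.

tao model classification_pyt train -e /path/to/experiment.yaml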

Training Data

Most of the GCViT models were trained on the ImageNet-1K dataset; a GCViT-Large variant was also trained on the ImageNet-22K dataset.

Performance

Evaluation Data

The GCViT models have been evaluated on the ImageNet-1K validation dataset.

Methodology and KPI

The key performance indicator is top-1 accuracy, following the standard evaluation protocol for image classification. The KPIs for the evaluation data are reported below.

Model                      Top-1 Accuracy
gcvit_xxtiny               0.796
gcvit_xtiny                0.820
gcvit_tiny                 0.834
gcvit_small                0.839
gcvit_base                 0.845
gcvit_large                0.848
gcvit_large_imagenet22k    0.874

Real-time Inference Performance

Inference is run on the provided unpruned model at FP16 precision using trtexec on the NVIDIA Jetson and discrete GPU platforms listed below. The Jetson devices run at the Max-N configuration for maximum GPU frequency. The performance shown here is for inference only; end-to-end performance with streaming video data might vary slightly depending on other bottlenecks in the hardware and software. A representative trtexec invocation is sketched after the table.

GC-ViT-L

Platform            Batch Size    FPS
Jetson Orin Nano    4             19.8
Orin NX 16GB        4             28.8
AGX Orin 64GB       8             80.7
A2                  16            67
T4                  16            122
A30                 16            388
L4                  8             268
L40                 4             628
A100                64            917
H100                64            1618
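
For reference, a run of this kind can be approximated with a trtexec command such as the one below. The ONNX file name and input tensor name are placeholders for illustration; the actual names depend on how the model is exported from TAO.

trtexec --onnx=gcvit_large.onnx --fp16 --shapes=input:8x3x384x384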

How to Use This Model

These models must be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. For software, these models can only be used with TAO Toolkit, the DeepStream SDK, or TensorRT.

The primary use case for these models is classifying objects in a color (RGB) image. They can be used to classify objects from photos and videos by using appropriate video or image decoding and pre-processing.

These models are intended for training and fine-tuning using TAO Toolkit and user datasets for image classification. High-fidelity models can be trained for new use cases. A Jupyter notebook is available as part of the TAO container and can be used to re-train the models.

The models are also intended for easy edge deployment using DeepStream SDK or TensorRT. DeepStream provides the facilities to create efficient video analytic pipelines to capture, decode, and pre-process the data before running inference.

Input

  • B x 3 x 224 x 224 (B C H W)
  • B x 3 x 384 x 384 (B C H W) for the ImageNet-22K pre-trained model (gcvit_large_imagenet22k_384)

Output

Category labels (1000 classes) for the input image.
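
As a minimal illustration of the expected input layout, the Python sketch below resizes an RGB image, normalizes it, and rearranges it to B x C x H x W, then maps the network output back to a class index. The normalization constants are an assumption (the standard ImageNet mean/std); the preprocessing actually applied is defined by the TAO experiment spec.

import numpy as np
from PIL import Image

# Standard ImageNet channel statistics (assumed here for illustration).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(path: str, size: int = 224) -> np.ndarray:
    # Load an RGB image and convert it to a 1 x 3 x size x size float array.
    img = Image.open(path).convert("RGB").resize((size, size), Image.BILINEAR)
    x = np.asarray(img, dtype=np.float32) / 255.0  # H x W x C in [0, 1]
    x = (x - MEAN) / STD                           # per-channel normalization
    return x.transpose(2, 0, 1)[None]              # B x C x H x W

def top1(logits: np.ndarray) -> int:
    # Map the 1000-way output for the first batch element to a class index.
    return int(np.argmax(logits, axis=-1)[0])

Use size=384 for the gcvit_large_imagenet22k_384 model.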

Instructions to Use Pretrained Models with TAO

To use these models as pretrained weights for transfer learning, please use the snippet below as a template for the model and train component of the experiment spec file to train a GCViT Classification model. For more information on the experiment spec file, refer to the TAO Toolkit User Guide.

model:
  init_cfg:
    checkpoint: /path/to/the/gc_vit_xxtiny.pth  # path to the downloaded pre-trained weights
  backbone:
    type: gc_vit_xxtiny  # must match the GCViT variant of the checkpoint above
    custom_args:
      use_rel_pos_bias: True  # use relative position bias in the attention layers
  head:
    type: LinearClsHead  # linear classification head trained on your target classes

Instructions to Deploy These Models with DeepStream

Documentation to deploy with DeepStream is provided in the "Deploying to DeepStream" chapter of the TAO User Guide.

Limitations

GCViT was trained on the ImageNet-1K dataset with 1000 object categories, so the model may not perform well on different data distributions. We recommend further fine-tuning on the target domain to achieve higher accuracy.

Model Versions

  • gcvit_xxtiny_imagenet1k - ImageNet-1K pre-trained GCViT-xxTiny model for fine-tuning.
  • gcvit_xtiny_imagenet1k - ImageNet-1K pre-trained GCViT-xTiny model for fine-tuning.
  • gcvit_tiny_imagenet1k - ImageNet-1K pre-trained GCViT-Tiny model for fine-tuning.
  • gcvit_small_imagenet1k - ImageNet-1K pre-trained GCViT-Small model for fine-tuning.
  • gcvit_base_imagenet1k - ImageNet-1K pre-trained GCViT-Base model for fine-tuning.
  • gcvit_large_imagenet1k - ImageNet-1K pre-trained GCViT-Large model for fine-tuning.
  • gcvit_large_imagenet22k_384 - ImageNet-22K pre-trained GCViT-Large model for fine-tuning.

Reference

Citations

  • Hatamizadeh, A., Yin, H., Heinrich, G., Kautz, J., Molchanov, P.: Global Context Vision Transformers.

License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License (CC BY-NC-SA 4.0). To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/, or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Ethical Considerations

NVIDIA platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developers to ensure that it meets the requirements for the relevant industry and use case, that the necessary instructions and documentation are provided to understand error rates, confidence intervals, and results, and that the model is being used under the conditions and in the manner intended.