TAO Commercial Pretrained GCViT Classification Model
Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for customizing purpose-built pre-trained AI models with your own data. TAO Toolkit adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.
The pre-trained models accelerate the AI training process and reduce costs associated with large scale data collection, labeling, and training from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.
Build end-to-end services and solutions that transform pixels and sensor data into actionable insights using TAO, the DeepStream SDK, and TensorRT. These models are suitable for object detection, classification, and segmentation.
Image Classification is a popular computer vision technique in which an image is classified into one of the designated classes based on the image features. This model card contains pretrained weights of most of the popular classification models. These weights may be used as a starting point with the classification app in Train Adapt Optimize (TAO) Toolkit to facilitate transfer learning.
GCViT (Global Context Vision Transformer) is a transformer-based family of backbones from NVIDIA research that achieves SOTA accuracy in ImageNet-1K classification. This family of backbones leverages global context self-attention, combined with standard local self-attention, to effectively and efficiently model both long- and short-range spatial interactions. GCViT can be used for image classification tasks as well as downstream tasks such as object detection. Use GCViT when you want to achieve SOTA accuracy on your target dataset while using fewer FLOPs than other Vision Transformers such as Swin and ConvNeXt. These classification networks have been pre-trained on the NVImageNet dataset.
The NVImageNet dataset has category names aligned with the original ImageNet-1K category names. The original ImageNet-1K is limited to non-commercial use only, but many recent pretraining techniques show the benefits of pretraining on an ImageNet-scale dataset first and then fine-tuning on downstream tasks. Thus, a model that is pretrained on the original ImageNet dataset may not be used to train models for NVIDIA products. The NVImageNet dataset, on the other hand, is permitted for commercial use. It is collected from 84 websites that allow images to be used commercially, as well as from Bing image search constrained to only return results that are free to share and use commercially.
This model was trained using the classification_pyt entrypoint in TAO. The training algorithm optimizes the network to minimize cross-entropy loss.
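For reference, the cross-entropy loss minimized during training can be sketched as follows. This is a minimal NumPy illustration of the objective, not TAO's actual training code; the function name and example logits are purely illustrative.

```python
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy loss for a single example: -log softmax(logits)[label]."""
    # Numerically stable log-softmax: shift by the max before exponentiating.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

# Example: 3-way classification where class 0 is the true label.
logits = np.array([2.0, 1.0, 0.1])
loss = cross_entropy(logits, 0)
```

The loss approaches zero as the logit of the true class dominates the others, which is what drives the network toward confident correct predictions.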
The GCViT models were trained on the NVImageNet dataset.
The GCViT models have been evaluated on the ImageNet-1K validation dataset.
Methodology and KPI
The key performance indicator is the accuracy, following the standard evaluation protocol for image classification. The KPI for the evaluation data are reported below.
Inference is run on the provided unpruned model at FP16 precision. The inference performance is measured using trtexec on Jetson AGX Xavier, Xavier NX, Orin, Orin NX, NVIDIA T4, and Ampere GPUs. The Jetson devices are run in the Max-N configuration for maximum GPU frequency. The performance shown here is for inference only. End-to-end performance with streaming video data might vary slightly depending on other bottlenecks in the hardware and software.
[Inference performance table: columns for Jetson Orin Nano, Orin NX 16GB, and AGX Orin 64GB; measurement data not recoverable from this extract.]
How to Use This Model
These models must be used with NVIDIA Hardware and Software. For Hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with TAO Toolkit, the DeepStream SDK, or TensorRT.
The primary use case for these models is classifying objects in a color (RGB) image. They can be used to classify objects from photos and videos by using appropriate video or image decoding and pre-processing.
These models are intended for training and fine-tuning with TAO Toolkit on user datasets for image classification. High-fidelity models can be trained for new use cases. A Jupyter notebook is available as part of the TAO container and can be used for re-training.
The models are also intended for easy edge deployment using DeepStream SDK or TensorRT. DeepStream provides the facilities to create efficient video analytic pipelines to capture, decode, and pre-process the data before running inference.
- B x 3 x 224 x 224 (B C H W)
Category labels (1000 classes) for the input image.
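A pre-processing step must produce the B x 3 x 224 x 224 input above. The sketch below is a hypothetical NumPy-only pipeline, not TAO's or DeepStream's actual pre-processing; the ImageNet mean/std normalization and nearest-neighbor resize are assumptions, so check the TAO spec for the exact values your model was trained with.

```python
import numpy as np

# Assumed ImageNet normalization constants (verify against your training spec).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8):
    """Convert an H x W x 3 uint8 RGB image to a 1 x 3 x 224 x 224 float32 batch."""
    h, w, _ = image_hwc_uint8.shape
    # Nearest-neighbor resize to 224 x 224 (real pipelines typically use bilinear).
    ys = np.arange(224) * h // 224
    xs = np.arange(224) * w // 224
    resized = image_hwc_uint8[ys][:, xs].astype(np.float32) / 255.0
    normalized = (resized - MEAN) / STD
    chw = normalized.transpose(2, 0, 1)   # HWC -> CHW
    return chw[np.newaxis, ...]           # add batch dim -> B C H W

batch = preprocess(np.zeros((480, 640, 3), dtype=np.uint8))
```

The resulting array matches the model's expected input layout and can be fed to a TensorRT or DeepStream inference call after any engine-specific binding.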
Instructions to Use Pretrained Models with TAO
To use these models as pretrained weights for transfer learning, use the snippet below as a template for the train component of the experiment spec file when training a GCViT classification model. For more information on the experiment spec file, refer to the TAO Toolkit User Guide.
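As a rough illustration of what such a spec fragment looks like, consider the sketch below. The field names and paths are assumptions modeled on the TAO 5.x classification_pyt spec layout, not an authoritative template; verify every key and value against the TAO Toolkit User Guide before use.

```yaml
# Illustrative only -- confirm exact keys against the TAO Toolkit User Guide.
model:
  backbone:
    type: "gc_vit_xx_tiny"          # assumed backbone identifier
  init_cfg:
    checkpoint: "/workspace/pretrained/gcvit_xxtiny_nvimagenet.pth"  # hypothetical path
```

The checkpoint path should point to the downloaded NVImageNet pre-trained weights so that training starts from the pretrained backbone rather than from scratch.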
Instructions to Deploy These Models with DeepStream
Documentation on deploying with DeepStream is provided in the "Deploying to DeepStream" chapter of the TAO User Guide.
GCViT was trained on the NVImageNet dataset with 1000 object categories. Hence, the model may not perform well on different data distributions; we recommend further fine-tuning on the target domain to achieve higher accuracy.
- gcvit_xxtiny_nvimagenet - NVImageNet pre-trained GCViT-xxTiny model for fine-tuning.
- gcvit_xtiny_nvimagenet - NVImageNet pre-trained GCViT-xTiny model for fine-tuning.
- gcvit_tiny_nvimagenet - NVImageNet pre-trained GCViT-Tiny model for fine-tuning.
- gcvit_small_nvimagenet - NVImageNet pre-trained GCViT-Small model for fine-tuning.
- gcvit_base_nvimagenet - NVImageNet pre-trained GCViT-Base model for fine-tuning.
- Hatamizadeh, A., Yin, H., Heinrich, G., Kautz, J., Molchanov, P.: Global Context Vision Transformers
- Get the TAO Container
- Get other purpose-built models from the NGC model registry:
The license to use this model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.
- Access the latest in Vision AI model development workflows with NVIDIA TAO Toolkit 5.0
- Improve accuracy and robustness of vision AI apps with Vision Transformers and NVIDIA TAO
- Train like a ‘pro’ without being an AI expert using TAO AutoML
- Create Custom AI models using NVIDIA TAO Toolkit with Azure Machine Learning
- Develop and Deploy AI-powered Robots with NVIDIA Isaac Sim and NVIDIA TAO
- Learn endless ways to adapt and supercharge your AI workflows with TAO - Whitepaper
- Customize Action Recognition with TAO and deploy with DeepStream
- Read the two-part blog on training and optimizing 2D body pose estimation model with TAO - Part 1 | Part 2
- Learn how to train a real-time License plate detection and recognition app with TAO and DeepStream.
- Model accuracy is extremely important; learn how you can achieve state of the art accuracy for classification and object detection models using TAO.
- More information on TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
- Refer to the TAO Toolkit documentation
- Read the TAO Toolkit Quick Start Guide and release notes.
- If you have any questions or feedback, please refer to the discussions on the TAO Toolkit Developer Forums
- Deploy your models for video analytics applications using the DeepStream SDK
- Deploy your models in Riva for ConvAI use cases.
NVIDIA platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developers to ensure that it meets the requirements for the relevant industry and use case, that the necessary instructions and documentation are provided to understand error rates, confidence intervals, and results, and that the model is being used under the conditions and in the manner intended.