Pre-trained FasterViT based NVImageNet Classification weights
Description: Pre-trained FasterViT weights trained on NVImageNet to facilitate transfer learning using TAO Toolkit.
Publisher: -
Latest Version: deployable-fastervit-1-nvimagenet_op17
Modified: December 12, 2023
Size: 204.34 MB

TAO Commercial Pretrained FasterViT Classification Model

What is Train Adapt Optimize (TAO) Toolkit?

Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for customizing purpose-built pre-trained AI models with your own data. TAO Toolkit adapts popular network architectures and backbones for your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.

The pre-trained models accelerate the AI training process and reduce costs associated with large-scale data collection, labeling, and training from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.

Build end-to-end services and solutions for transforming pixels and sensor data into actionable insights using TAO Toolkit, the DeepStream SDK, and TensorRT. These models are suitable for object detection, classification, and segmentation.

Model Overview

Image classification is a popular computer vision technique in which an image is assigned to one of a set of designated classes based on its features. This model card contains pretrained weights for the FasterViT family of classification models. These weights can be used as a starting point with the classification app in TAO Toolkit to facilitate transfer learning.

Model Architecture

FasterViT is a hybrid CNN-ViT family of backbones from NVIDIA Research that achieves state-of-the-art (SOTA) ImageNet-1K classification accuracy with a focus on throughput. The family leverages Hierarchical Attention (HAT), which decomposes global self-attention, with its quadratic complexity, into multi-level attention with reduced computational cost. The FasterViT architecture was designed with TensorRT in mind, so model throughput is highly optimized in the TensorRT SDK. Use FasterViT when you want SOTA accuracy on your target dataset with higher throughput than other vision transformers such as Swin and ConvNeXt.
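
The core of HAT is local, window-based self-attention combined with learnable carrier tokens that summarize each window and exchange information globally. The sketch below is illustrative only (names and shapes are assumptions, not the FasterViT code): it shows how restricting attention to windows of w tokens cuts the cost from O(N^2) for global attention over N tokens to O(N * w).

import torch
import torch.nn.functional as F

def window_attention(x: torch.Tensor, window: int) -> torch.Tensor:
    # x: (B, N, C) patch tokens; N must be divisible by the window size
    B, N, C = x.shape
    xw = x.view(B * N // window, window, C)        # group tokens into local windows
    scores = xw @ xw.transpose(-2, -1) / C ** 0.5  # (num_windows, w, w) attention scores
    attn = F.softmax(scores, dim=-1)               # attend only within each window
    return (attn @ xw).view(B, N, C)

x = torch.randn(2, 196, 64)         # 14 x 14 patch tokens with 64 channels
y = window_attention(x, window=49)  # 7 x 7 windows
print(y.shape)                      # torch.Size([2, 196, 64])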

The NVImageNet dataset has category names aligned with the original ImageNet-1K category names. The original ImageNet-1K is limited to non-commercial use, yet many recent pretraining techniques show the benefit of pretraining on ImageNet first and then fine-tuning on downstream tasks; a model pretrained on the original ImageNet dataset therefore might not be allowed to train models for NVIDIA products. The NVImageNet dataset is permitted for commercial use: it is collected from 84 websites that allow commercial use of their images, plus data from Bing image search restricted to results that are free to share and use commercially.

Training

This model was trained using the classification_pyt entrypoint in TAO. The training algorithm optimizes the network to minimize cross-entropy loss.
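
As a minimal sketch of this objective (using a stand-in model, not the classification_pyt implementation), one training step minimizing cross-entropy in PyTorch looks like the following:

import torch
import torch.nn as nn

# Stand-in for a FasterViT backbone plus linear classification head.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1000))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)     # dummy image batch
labels = torch.randint(0, 1000, (8,))    # dummy class indices

optimizer.zero_grad()
loss = criterion(model(images), labels)  # cross-entropy between logits and labels
loss.backward()
optimizer.step()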

Training Data

The FasterViT models were trained on the NVImageNet dataset.

Performance

Evaluation Data

The FasterViT models have been evaluated on the ImageNet1K validation dataset.

Methodology and KPI

The key performance indicator is accuracy, following the standard evaluation protocol for image classification. The KPIs for the evaluation data are reported below.

Model              Top-1 Accuracy
faster_vit_1_224   0.677
faster_vit_2_224   0.684
faster_vit_4_224   0.692
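
For reference, top-1 accuracy is simply the fraction of samples whose highest-scoring class matches the ground-truth label; a minimal sketch:

import torch

def top1_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
    # Prediction = argmax over class scores; correct when it equals the label.
    return (logits.argmax(dim=1) == labels).float().mean().item()

logits = torch.randn(8, 1000)            # dummy scores for 1000 classes
labels = torch.randint(0, 1000, (8,))
print(top1_accuracy(logits, labels))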

Real-Time Inference Performance

The inference is run on the provided unpruned model at FP16 precision. Inference performance is measured with trtexec on Jetson Orin devices and NVIDIA data center GPUs (A2, T4, A30, L4, L40, A100, H100). The Jetson devices run in the MAXN configuration for maximum GPU frequency. The performance shown here is for inference only; end-to-end performance with streaming video data may vary depending on other bottlenecks in the hardware and software.
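
As a rough illustration of the FPS metric in the tables below (this is not the trtexec methodology; the model here is a placeholder, and real measurements should use trtexec on the exported engine), inference-only throughput can be estimated like this:

import time
import torch

model = torch.nn.Conv2d(3, 16, 3).cuda().half().eval()  # placeholder, not FasterViT
x = torch.randn(32, 3, 224, 224, device="cuda", dtype=torch.half)  # FP16 batch of 32

with torch.no_grad():
    for _ in range(10):                  # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    iters = 100
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()

fps = iters * x.shape[0] / (time.time() - start)
print(f"{fps:.0f} images/sec")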

FasterViT-1-224

Platform           Batch Size   FPS
Jetson Orin Nano   4            199
Orin NX 16GB       4            292
AGX Orin 64GB      8            773
A2                 32           761
T4                 32           1214
A30                16           3490
L4                 32           2871
L40                32           8342
A100               32           5653
H100               32           8825

FasterViT-2-224

Platform           Batch Size   FPS
Jetson Orin Nano   4            231
Orin NX 16GB       4            292
AGX Orin 64GB      8            628
A2                 32           544
T4                 32           889
A30                16           2719
L4                 32           1753
L40                32           5607
A100               32           4698
H100               32           7363

FasterViT-4-224

Platform           Batch Size   FPS
Jetson Orin Nano   4            43
Orin NX 16GB       4            59
AGX Orin 64GB      8            149
A2                 32           117
T4                 32           198
A30                16           666
L4                 32           421
L40                32           1106
A100               32           1375
H100               32           2510

How to Use This Model

These models must be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU including NVIDIA Jetson devices. These models can only be used with TAO Toolkit, the DeepStream SDK, or TensorRT.

The primary use case for these models is classifying objects in a color (RGB) image. They can be used to classify objects from photos and videos by using appropriate video or image decoding and pre-processing.

These models are intended for training and fine-tuning with TAO Toolkit and user datasets for image classification. High-fidelity models can be trained for new use cases. A Jupyter notebook is available as part of the TAO container and can be used to re-train the models.

The models are also intended for easy edge deployment using DeepStream SDK or TensorRT. DeepStream provides the facilities to create efficient video analytic pipelines to capture, decode, and pre-process the data before running inference.

Input

  • B x 3 x 224 x 224 (B, C, H, W: batch size, channels, height, width)

Output

Category labels (1000 classes) for the input image.
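
A hedged pre-processing sketch for producing that input (the 256/224 resize-crop recipe and ImageNet normalization constants are common defaults, assumed here rather than taken from this model card):

import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),               # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("sample.jpg").convert("RGB")   # placeholder path
batch = preprocess(img).unsqueeze(0)            # -> (1, 3, 224, 224), i.e. B C H W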

Instructions to Use Pretrained Models with TAO

To use these models as pretrained weights for transfer learning, use the following snippet as a template for the model component of the experiment spec file when training a FasterViT classification model. For more information on the experiment spec file, refer to the TAO Toolkit User Guide.

model:
  init_cfg:
    checkpoint: /path/to/the/faster_vit_0_224.pth  # pretrained weights downloaded from NGC
  backbone:
    type: faster_vit_0_224  # must match the variant of the checkpoint above
  head:
    type: LinearClsHead  # linear classification head trained on your own classes
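
With a complete experiment spec in place, training is typically launched through the TAO launcher, for example with a command of the form tao model classification_pyt train -e /path/to/experiment.yaml (the exact syntax depends on your TAO Toolkit version; consult the User Guide).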

Instructions to Deploy These Models with DeepStream

Documentation to deploy with DeepStream is provided in the "Deploying to DeepStream" chapter of the TAO User Guide.

Limitations

FasterViT was trained on the NVImageNet dataset with 1,000 object categories and therefore might not perform well on different data distributions. We recommend further fine-tuning on the target domain to achieve higher accuracy.

Model Versions

  • fastervit_0_224_1k - ImageNet1K pre-trained FasterViT-0-224 model for fine-tuning.
  • fastervit_1_224_1k - ImageNet1K pre-trained FasterViT-1-224 model for fine-tuning.
  • fastervit_2_224_1k - ImageNet1K pre-trained FasterViT-2-224 model for fine-tuning.
  • fastervit_3_224_1k - ImageNet1K pre-trained FasterViT-3-224 model for fine-tuning.
  • fastervit_4_224_1k - ImageNet1K pre-trained FasterViT-4-224 model for fine-tuning.
  • fastervit_5_224_1k - ImageNet1K pre-trained FasterViT-5-224 model for fine-tuning.
  • fastervit_6_224_1k - ImageNet1K pre-trained FasterViT-6-224 model for fine-tuning.
  • fastervit_4_21k_224_w14 - ImageNet22k pre-trained FasterViT-4-224 model for fine-tuning.
  • fastervit_4_21k_384_w24 - ImageNet22k pre-trained FasterViT-4-384 model for fine-tuning.
  • fastervit_4_21k_512_w32 - ImageNet22k pre-trained FasterViT-4-512 model for fine-tuning.
  • fastervit_4_21k_768_w48 - ImageNet22k pre-trained FasterViT-4-768 model for fine-tuning.

Reference

Citations

  • Hatamizadeh, A., Heinrich, G., Yin, H., Tao, A., Alvarez, J., Kautz, J., Molchanov, P.: FasterViT: Fast Vision Transformers with Hierarchical Attention. arXiv:2306.06189 (2023).

Using TAO Pre-Trained Models

  • Get the TAO Container
  • Get other purpose-built models from the NGC model registry:
    • TrafficCamNet
    • DashCamNet
    • FaceDetectIR
    • VehicleMakeNet
    • VehicleTypeNet
    • ActionRecognitionNet
    • PoseClassificationNet
    • ReIdentificationNet

License

The license to use this model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.

Technical Blogs

  • Access the latest in Vision AI model development workflows with NVIDIA TAO Toolkit 5.0
  • Improve accuracy and robustness of vision AI apps with Vision Transformers and NVIDIA TAO
  • Train like a ‘pro’ without being an AI expert using TAO AutoML
  • Create Custom AI models using NVIDIA TAO Toolkit with Azure Machine Learning
  • Develop and Deploy AI-powered Robots with NVIDIA Isaac Sim and NVIDIA TAO
  • Learn endless ways to adapt and supercharge your AI workflows with TAO - Whitepaper
  • Customize Action Recognition with TAO and deploy with DeepStream
  • Read the two-part blog on training and optimizing a 2D body-pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train a real-time License plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important; learn how you can achieve state of the art accuracy for classification and object detection models using TAO.

Suggested Reading

  • More information on TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
  • Refer to the TAO Toolkit documentation
  • Read the TAO Toolkit Quick Start Guide and release notes.
  • If you have any questions or feedback, see the discussions on the TAO Toolkit Developer Forums
  • Deploy your models for video analytics applications using the DeepStream SDK
  • Deploy your models in Riva for conversational AI use cases.

Ethical Considerations

NVIDIA platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developers to ensure that it meets the requirements for the relevant industry and use case, that the necessary instructions and documentation are provided to understand error rates, confidence intervals, and results, and that the model is being used under the conditions and in the manner intended.