Pre-trained SegFormer NvImageNet weights

Description: Pre-trained SegFormer weights trained on NVImageNet to facilitate transfer learning using TAO Toolkit.
Latest Version: fan_large_hybrid_nvimagenet
Modified: October 16, 2023
Size: 293.76 MB

TAO Commercial Pretrained FAN Classification Model

What is Train Adapt Optimize (TAO) Toolkit?

Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for customizing purpose-built, pre-trained AI models with your own data. TAO Toolkit adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.

The pre-trained models accelerate the AI training process and reduce the costs associated with large-scale data collection, labeling, and training models from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.

Build end-to-end services and solutions for transforming pixels and sensor data into actionable insights using TAO Toolkit, the DeepStream SDK, and TensorRT. These models are suitable for object detection, classification, and segmentation.

Model Overview

The model described in this card provides pre-trained starting weights for the SegFormer semantic segmentation task. The weights were trained with the image classification pipeline on the internal NVIDIA ImageNet dataset (NVImageNet).

Model Architecture

FAN (Fully Attentional Network) is a transformer-based family of backbones from NVIDIA Research that achieves state-of-the-art robustness against various image corruptions. This family of backbones generalizes easily to new domains and is more robust to noise, blur, and similar corruptions. The key design behind the FAN block is the attentional channel-processing module, which leads to robust representation learning. FAN can be used for image classification tasks as well as downstream tasks such as object detection and segmentation. Use FAN when your test dataset has a domain gap from the training dataset.

SegFormer is a real-time, state-of-the-art, transformer-based semantic segmentation model. It is a simple, efficient, yet powerful framework that unifies Transformers with lightweight multilayer perceptron (MLP) decoders and predicts a class label for every pixel in the input image.

Training

This model was trained using the classification_pyt entrypoint in TAO. The training algorithm optimizes the network to minimize the cross-entropy loss.
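
The loss computation itself is standard. As a rough illustration (not the actual TAO training code; the model, optimizer, and data handling are left as placeholders), a single classification training step in PyTorch looks like this:

import torch
import torch.nn.functional as F

# Illustrative stand-ins: `model` maps a batch of images to class logits,
# `images` is a (B, 3, H, W) float tensor, `labels` is a (B,) long tensor.
def train_step(model, optimizer, images, labels):
    logits = model(images)                  # (B, num_classes)
    loss = F.cross_entropy(logits, labels)  # cross-entropy objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()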

Training Data

The FAN models were trained on the NVImageNet dataset.

Performance

Evaluation Data

The FAN models were evaluated on the ImageNet-1K validation dataset.

Methodology and KPI

The key performance indicator is accuracy, following the standard evaluation protocol for image classification. The KPIs for the evaluation data are reported below; a short sketch of how top-1 accuracy is computed follows the table.

Model                         Top-1 Accuracy (%)
FAN-Base-Hybrid_nvimagenet    69.1
fan_large_hybrid_nvimagenet   68.3
FAN-Small-Hybrid_nvimagenet   68.38
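
Top-1 accuracy is the fraction of validation images whose highest-scoring predicted class matches the ground-truth label. A minimal sketch of the metric (the function name is illustrative, not part of TAO):

import torch

def top1_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
    # logits: (N, num_classes) raw scores; labels: (N,) class indices
    preds = logits.argmax(dim=1)  # highest-scoring class per image
    return (preds == labels).float().mean().item()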

Real-time Inference Performance

Inference is run on the provided unpruned model at FP16 precision, measured with trtexec on Jetson AGX Xavier, Xavier NX, Orin, Orin NX, NVIDIA T4, and Ampere GPUs. The Jetson devices run in Max-N configuration for maximum GPU frequency. The numbers shown are for inference only (BS denotes batch size; FPS, frames per second); end-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.

FAN-B-H-384 (384 resolution)

Platform           BS   FPS
Jetson Orin Nano    4    16
Orin NX 16GB        4    23.4
AGX Orin 64GB       8    61.2
A2                  8    55.5
T4                  8    91
A30                16   260
L4                  4   207
L40                 4   558
A100               64   577
H100               64   985

FAN-L-H-384 (384 resolution)

Platform           BS   FPS
Jetson Orin Nano    -     -
Orin NX 16GB        -     -
AGX Orin 64GB       -     -
A2                  8    38
T4                  4    62
A30                 8   179
L4                  4   145
L40                 4   366
A100               64   402
H100               64   681

How to Use This Model

These models must be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. For software, the models can only be used with Train Adapt Optimize (TAO) Toolkit, the DeepStream SDK, or TensorRT.

The primary use case for these models is classifying objects in a color (RGB) image. The models can be used to classify objects in photos and videos with appropriate image or video decoding and pre-processing.

These models are intended for training and fine-tuning with TAO Toolkit on user datasets for image classification. High-fidelity models can be trained for new use cases. A Jupyter notebook is available as part of the TAO container and can be used for re-training.

The models are also intended for easy edge deployment using the DeepStream SDK or TensorRT. DeepStream provides the facilities to create efficient video analytics pipelines to capture, decode, and pre-process the data before running inference.

Input

  • B x 3 x 224 x 224 (B, C, H, W: batch size, channels, height, width)
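
As a rough sketch of how an RGB image can be brought into this B x 3 x 224 x 224 layout, the snippet below uses torchvision; the resize strategy and normalization constants are the common ImageNet defaults and are an assumption here, not values taken from this card:

import torch
from PIL import Image
from torchvision import transforms

# Assumed preprocessing; the exact resize and normalization used by the
# TAO pipeline may differ from these common ImageNet defaults.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("sample.jpg").convert("RGB")  # hypothetical input file
batch = preprocess(image).unsqueeze(0)           # (1, 3, 224, 224) = (B, C, H, W)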

Instructions to Use Pretrained Models with TAO

To use these models as pretrained weights for transfer learning, use the snippet below as a template for the model component of the experiment spec file when training a FAN classification model. For more information on the experiment spec file, refer to the TAO Toolkit User Guide.

model:
  init_cfg:
    # Path to the pre-trained checkpoint downloaded from NGC
    checkpoint: /path/to/the/fan_hybrid_base_nvimagenet.pth
  backbone:
    # FAN-Hybrid-Base backbone; use the matching type for the
    # small or large checkpoint
    type: fan_base_16_p4_hybrid
    custom_args:
      use_rel_pos_bias: True
  head:
    # Linear classification head trained on top of the backbone
    type: LinearClsHead
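
Before pointing the spec at the checkpoint, it can help to sanity-check the downloaded .pth file. A small sketch, assuming a standard PyTorch checkpoint layout (the path matches the template above):

import torch

# Load on CPU so no GPU is needed just to inspect the file.
ckpt = torch.load("/path/to/the/fan_hybrid_base_nvimagenet.pth",
                  map_location="cpu")

# Checkpoints are typically dicts; print the top-level keys and a few
# parameter shapes to confirm the backbone weights are present.
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys())[:10])
    state = ckpt.get("state_dict", ckpt)
    for name, tensor in list(state.items())[:5]:
        print(name, tuple(tensor.shape))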

Instructions to Deploy These Models with DeepStream

Documentation on deploying with DeepStream is provided in the "Deploying to DeepStream" chapter of the TAO Toolkit User Guide.

Limitations

FAN was trained on the NVImageNet dataset, which covers 1000 object categories. The model may therefore not perform well on different data distributions; we recommend further fine-tuning on the target domain to achieve higher accuracy.

Model Versions

  • FAN-Base-Hybrid_nvimagenet - NVImageNet pre-trained FAN-Hybrid-Base model fine-tuned on ImageNet-1K.
  • fan_large_hybrid_nvimagenet - NVImageNet pre-trained FAN-Hybrid-Large model for fine-tuning (224 resolution).
  • FAN-Small-Hybrid_nvimagenet - NVImageNet pre-trained FAN-Hybrid-Small model for fine-tuning (224 resolution).

Reference

Citations

  • Zhou, Daquan, et al. "Understanding the robustness in vision transformers." International Conference on Machine Learning. PMLR, 2022.

License

The license to use this model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.

Ethical Considerations

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developers to ensure that it meets the requirements for the relevant industry and use case, that the necessary instructions and documentation are provided to understand error rates, confidence intervals, and results, and that the model is being used under the conditions and in the manner intended.