Mask Auto Label

Description: Pretrained model to generate semantic segmentation labels.
Publisher: NVIDIA
Latest Version: trainable_v1.0
Modified: October 16, 2023
Size: 1.04 GB

TAO Pretrained Mask Auto Labeler (MAL)

What is Train Adapt Optimize (TAO) Toolkit?

Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for taking purpose-built pre-trained AI models and customizing them with your own data. TAO adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.

Pre-trained models accelerate the AI training process and reduce costs associated with large scale data collection, labeling, and training models from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.

Build end-to-end services and solutions for transforming pixels and sensor data into actionable insights using TAO, the DeepStream SDK, and TensorRT. These models are suitable for object detection, classification, and segmentation.

Model Overview

Mask Auto-Labeler (MAL) is a high-quality Transformer-based mask auto-labeling framework for instance segmentation using only box annotations. MAL takes box-cropped images as inputs and conditionally generates their mask pseudo-labels. This model card contains pre-trained weights for MAL trained on the COCO dataset to facilitate transfer learning through TAO Toolkit.

Model Architecture

The model in this instance is an instance segmentation network that takes color (RGB) images and bounding boxes as inputs and generates segmentation masks as outputs. The backbone feature extractor is a ViT-MAE-Base model that was pre-trained on the ImageNet dataset.

Training

This model was trained using the MAL entrypoint in TAO. The training algorithm optimizes the network to minimize a Multiple Instance Learning (MIL) loss and a Conditional Random Field (CRF) loss over the annotated objects.
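
As a rough illustration of the MIL term, each row and each column of pixels inside a ground-truth box can be treated as a positive bag that must contain at least one foreground pixel. The sketch below renders that idea in PyTorch for intuition only; the function name and pooling scheme are assumptions, the CRF term is omitted, and this is not the exact loss implemented in TAO.

import torch

def mil_loss(mask_logits: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Minimal sketch of a box-supervised MIL loss (illustrative only).

    mask_logits: (B, H, W) predicted mask logits for box-cropped images.
    Every row and column of a crop lies inside its box, so each one is
    a positive bag: at least one of its pixels should be foreground.
    """
    probs = torch.sigmoid(mask_logits)
    row_max = probs.max(dim=2).values  # (B, H): most confident pixel per row
    col_max = probs.max(dim=1).values  # (B, W): most confident pixel per column
    # Push each bag's most confident pixel toward foreground.
    return -(torch.log(row_max + eps).mean() + torch.log(col_max + eps).mean())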

Training Data

MAL was trained on the COCO 2017 dataset, which contains 118K training images, 5K validation images, and corresponding annotation files. The annotations contain bounding boxes for 80 object categories.
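
For reference, a COCO-format annotation file is a JSON document with images, annotations, and categories sections, with boxes stored as [x, y, width, height] in pixels. The minimal Python structure below is an illustrative placeholder (the file name, IDs, and values are made up), not an excerpt from the dataset:

# Minimal COCO-style annotation structure (placeholder values).
coco = {
    "images": [
        {"id": 1, "file_name": "000000000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 18,                   # COCO category 18 is "dog"
            "bbox": [73.0, 41.0, 220.0, 180.0],  # [x, y, width, height] in pixels
            "area": 39600.0,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 18, "name": "dog", "supercategory": "animal"},
    ],
}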

Performance

Evaluation Data

We test the MAL model on the COCO 2017 validation dataset.

Methodology and KPI

The key performance indicator is the mean Intersection over Union (mIoU), following the standard evaluation protocol for segmentation. The KPI for the evaluation data is reported below.

model          precision   mIoU
mal_vit_base   FP32        0.788
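
For context, per-instance IoU compares a predicted mask against its ground-truth mask, and mIoU averages this over all instances. The sketch below is a minimal NumPy version under the assumption that masks arrive as paired boolean arrays of identical shape; it is not TAO's evaluation code.

import numpy as np

def mean_iou(pred_masks, gt_masks):
    """Mean IoU over paired boolean masks of identical shape."""
    ious = []
    for pred, gt in zip(pred_masks, gt_masks):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        # Two empty masks count as a perfect match.
        ious.append(inter / union if union else 1.0)
    return float(np.mean(ious))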

How to Use This Model

These models must be used with NVIDIA hardware and software: they can run on any NVIDIA GPU, and they can only be used with the TAO Toolkit.

The primary use case for these models is weakly supervised instance segmentation and auto-labeling.

The model is intended for training and fine-tuning with the Train Adapt Optimize (TAO) Toolkit and the user's object detection dataset. High-fidelity models can be trained for new use cases. A Jupyter notebook is available as part of the TAO container and can be used to re-train the model.

Input

  • Image: B × 3 × 512 × 512 (batch size, channels, height, width); see the preprocessing sketch below
  • Label: bounding-box ground truth in COCO format
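
To match this input contract, each annotated box is cropped out of the image and resized to 512 × 512 before batching. The snippet below is an assumed preprocessing path using torchvision; the function name is hypothetical, and TAO's exact transforms (e.g. normalization) may differ.

import torch
import torchvision.transforms.functional as TF
from PIL import Image

def boxes_to_crops(image: Image.Image, boxes, size: int = 512) -> torch.Tensor:
    """Crop COCO-style [x, y, w, h] boxes from an RGB image and resize
    each crop to size x size; returns a (B, 3, size, size) tensor."""
    crops = []
    for x, y, w, h in boxes:
        crop = image.crop((x, y, x + w, y + h)).resize((size, size))
        crops.append(TF.to_tensor(crop))  # (3, size, size), float in [0, 1]
    return torch.stack(crops)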

Output

A segmentation mask for each object in the input image.

Instructions to Use Pretrained Models with TAO

To use this model as pretrained weights for transfer learning, use the snippet below as a template for the model and train components of the experiment spec file used to train a MAL model. For more information on the experiment spec file, please refer to the TAO Toolkit User Guide.

checkpoint: /path/to/vit-mae-base-16epoch=10.pth  # pre-trained MAL weights
model:
  arch: 'vit-mae-base/16'  # ViT-MAE-Base backbone with 16x16 patches
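
With the spec assembled, training is launched through the MAL entrypoint of the TAO launcher; a command of the form tao model mal train -e /path/to/experiment.yaml is typical of recent TAO releases, but the exact subcommand and flags vary by version, so consult the TAO Toolkit User Guide for your release.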

Limitations

MAL was trained on the COCO dataset with 80 object categories, so the model may not perform well on data from completely different domains. We recommend further fine-tuning on the target domain to obtain a higher mIoU.

Model Versions

  • mal_vit_base_trainable_v1.0 - Pre-trained ViT-Base MAL model for fine-tuning.

Reference

Citations

  • Lan, S., Yang, X., Yu, Z., Wu, Z., Alvarez, J.M., Anandkumar, A.: Vision Transformers Are Good Mask Auto-Labelers
  • He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition
  • Hatamizadeh, A., Yin, H., Heinrich, G., Kautz, J., Molchanov, P.: Global Context Vision Transformers
  • Lin, T., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.: Microsoft COCO: Common Objects in Context

License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/, or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Ethical Considerations

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developers to ensure that it meets the requirements for the relevant industry and use case, that the necessary instructions and documentation are provided to understand error rates, confidence intervals, and results, and that the model is being used under the conditions and in the manner intended.