Visual ChangeNet Segmentation - MVTec

Description: Change segmentation model.
Publisher: -
Latest Version: trainable_v1.0
Modified: December 12, 2023
Size: 621.44 MB

Visual ChangeNet-Segmentation Model Card - MVTec-AD (Research-only)

Model Overview

The model described in this card segments industrial defects using the MVTec-AD dataset. The input is a pair of co-registered images of the same component (a transistor), namely a golden (reference) image and a test image, both obtained from the MVTec-AD dataset. The model outputs a binary segmentation change map that highlights defects in the post-change test image by comparing it to the corresponding golden image.

Model Architecture

Visual ChangeNet is a state-of-the-art, transformer-based change detection model. It is built on the Siamese network, a class of neural network architectures containing two or more identical subnetworks whose parameters are updated in tandem during training. In TAO, Visual ChangeNet takes two images as input, and the goal is to either classify or segment the change between the "golden" (reference) image and the "test" image. TAO supports the FAN backbone network for both Visual ChangeNet architectures. For more details about training FAN backbones, see the Pre-trained FAN-based ImageNet Classification model. In TAO, two types of change detection networks are supported:

  • Visual ChangeNet-Segmentation - for segmentation of change between two input images.
  • Visual ChangeNet-Classification - for classification of change between two input images.

Visual ChangeNet-Segmentation is specifically intended for change segmentation. In this model card, it is used to demonstrate industrial defect detection on the MVTec-AD dataset. The model uses a FAN backbone pretrained on the NVImageNet dataset and is then fine-tuned on the MVTec-AD dataset for the transistor class.
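
The following is a minimal PyTorch sketch of the Siamese idea: a shared backbone encodes both images, and per-pixel change is predicted from the difference of their feature maps. It is illustrative only; the small convolutional backbone and difference-based decoder below are stand-ins for the FAN-hybrid backbone and decoder that TAO actually uses.

import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    # Illustrative stand-in for Visual ChangeNet, not the TAO implementation.
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared backbone: the same weights encode both images.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder maps the feature difference back to per-pixel class logits.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, golden: torch.Tensor, test: torch.Tensor) -> torch.Tensor:
        # Identical subnetwork applied to both inputs ("Siamese").
        f_golden = self.backbone(golden)
        f_test = self.backbone(test)
        # Change is estimated from the difference of the two feature maps.
        return self.decoder(torch.abs(f_test - f_golden))

golden = torch.randn(1, 3, 256, 256)       # NCHW reference image
test = torch.randn(1, 3, 256, 256)         # NCHW test image
logits = SiameseChangeNet()(golden, test)  # (1, 2, 256, 256) change logits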

Training

This model was trained using the visual_changenet entrypoint in TAO. The training algorithm optimizes the network to minimize the cross-entropy loss for every pixel of the mask.
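
As a sketch of this objective, assuming the two-class output described later in this card, the per-pixel cross-entropy can be computed as follows (shapes and values are illustrative):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()          # averages the loss over every pixel
logits = torch.randn(4, 2, 256, 256)       # (N, num_classes, H, W) predictions
mask = torch.randint(0, 2, (4, 256, 256))  # (N, H, W); 0 = no change, 1 = defect
loss = criterion(logits, mask)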

Training Data

The Visual ChangeNet-Segmentation model was trained on the open-source MVTec-AD dataset. The MVTec Anomaly Detection dataset provides images of 15 different objects. For each object, the dataset contains non-defective images (golden images) and defective images with a pixel-perfect segmentation mask of the defect region.

Dataset                 No. of Images
MVTec-AD (transistor)   303

The dataset is randomly split into 212 training and 91 validation images. The dataset is additionally pre-processed into the format expected by the Visual ChangeNet-Segmentation model. For more details on the dataset pre-processing, see the Visual ChangeNet-Segmentation Notebook (MVTec).

[Figure: sample image pair showing a changed scenario]

Performance

Evaluation Data

The model performance was evaluated on a validation dataset, which had a total of 91 images.

Methodology and KPI

The performance of the Visual ChangeNet-Segmentation model is measured using overall accuracy, along with precision, recall, IoU, and F1 score for the change class.

Model                           Model Architecture   Testing Images   Precision   Recall   IoU     F1   Overall Accuracy
Visual ChangeNet-Segmentation   Siamese Network      91               94.5        89.8     86.15   92   99.57
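
For reference, the change-class KPIs above follow from pixel-level confusion counts as sketched below; the function and its tp/fp/fn/tn arguments are hypothetical, and the table reports the resulting fractions as percentages.

def change_class_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    # Pixel-level counts for the change class: true/false positives,
    # false negatives, and true negatives.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "precision": precision,
        "recall": recall,
        "iou": tp / (tp + fp + fn),  # intersection over union
        "f1": 2 * precision * recall / (precision + recall),
        "overall_accuracy": (tp + tn) / (tp + fp + fn + tn),
    }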

Real-Time Inference Performance

The inference is run on the provided unpruned model at FP16 precision. The inference performance is measured using trtexec on the Jetson devices and discrete NVIDIA GPUs listed below; the Jetson devices run at Max-N configuration for maximum GPU frequency. The numbers shown are for inference only; end-to-end performance with streaming video data might vary depending on other bottlenecks in the hardware and software.

Platform        Batch Size   FPS
Orin Nano 8GB   16           15.19
Orin NX 16GB    16           21.92
AGX Orin 64GB   16           55.07
A2              16           36.02
T4              16           59.7
L4              8            131.48
A30             16           204.12
L40             8            364
A100            32           435.18
H100            32           841.68

How to Use this Model

These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit or TensorRT.

The primary use case intended for these models is Visual ChangeNet-Segmentation on RGB component-level images. The model is a Siamese network that outputs a binary change map denoting pixel-level change between the two images, that is, the defects.

These models are intended for training and fine-tuning with the TAO Toolkit and your own datasets for image comparison. High-fidelity models can be trained on new use cases. A Jupyter notebook is available as part of the TAO container and can be used for re-training; see the Visual ChangeNet-Segmentation Notebook (MVTec).

The models are also intended for edge deployment using TensorRT.

Input

Two input images with the following characteristics:

Golden image: RGB image of dimensions 256 x 256 x 3 (H x W x C)

Test image: RGB image of dimensions 256 x 256 x 3 (H x W x C)

Channel ordering of the input: NCHW, where N = batch size, C = number of channels (3), H = height of the images (256), W = width of the images (256)

Output

Segmentation change map with the same spatial resolution as the input images: 256 x 256 x 2 (H x W x C), where C = number of output change classes (2).
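
The following NumPy sketch shows how an image pair can be shaped into the NCHW layout above and how the two-class output reduces to a binary change map. The zero arrays are stand-ins for real image data and model output, the output is assumed here to be in NCHW layout, and any normalization required by the deployed model is omitted:

import numpy as np

golden_hwc = np.zeros((256, 256, 3), dtype=np.float32)  # H x W x C golden image
test_hwc = np.zeros((256, 256, 3), dtype=np.float32)    # H x W x C test image

def to_nchw(img_hwc: np.ndarray) -> np.ndarray:
    # HWC -> CHW, then add the batch dimension: 1 x 3 x 256 x 256.
    return np.transpose(img_hwc, (2, 0, 1))[np.newaxis, ...]

inputs = (to_nchw(golden_hwc), to_nchw(test_hwc))

# Per-pixel scores for the 2 classes (no-change, change); argmax over the
# class axis yields the binary segmentation change map.
scores = np.zeros((1, 2, 256, 256), dtype=np.float32)  # stand-in for model output
change_map = scores.argmax(axis=1)[0]                  # 256 x 256, values in {0, 1}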

Input Image

[Figure: sample golden and test images side by side with the ground-truth segmentation change map denoting the defect]

Using the Model with TAO

To use these models as pretrained weights for transfer learning, use the following snippet as a template for the model and evaluate components of the experiment spec file when training a Visual ChangeNet-Segmentation model. For more information on the experiment spec file, see the TAO Toolkit User Guide - Visual ChangeNet-Segmentation.

model:
  backbone:
    type: "fan_small_12_p4_hybrid"
    pretrained_backbone_path: null
evaluate:
  model_path: "???"
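
With these components in place, training is launched through the visual_changenet entrypoint mentioned in the Training section; see the TAO Toolkit User Guide for the exact command-line invocation and the remaining components of the spec file (such as the dataset and train sections).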

Limitations

Expecting Co-Registered Imagery

The Visual ChangeNet-Segmentation model was trained on pair-wise co-registered images of the transistor class and might not perform well on misaligned image pairs or on different object types.

Model Versions

  • trainable_v1.0 - FAN-Hybrid Base Visual ChangeNet-Segmentation MVTec model, trainable.
  • deployable_v1.0 - FAN-Hybrid Base Visual ChangeNet-Segmentation MVTec model, deployable to DeepStream.

References

Using TAO Pre-Trained Models

  • Get TAO Container
  • Get other purpose-built models from the NGC model registry:
    • TrafficCamNet
    • PeopleNet
    • PeopleNet-Transformer
    • DashCamNet
    • FaceDetectIR
    • VehicleMakeNet
    • VehicleTypeNet
    • PeopleSegNet
    • PeopleSemSegNet
    • License Plate Detection
    • License Plate Recognition
    • Gaze Estimation
    • Facial Landmark
    • Heart Rate Estimation
    • Gesture Recognition
    • Emotion Recognition
    • FaceDetect
    • 2D Body Pose Estimation
    • ActionRecognitionNet
    • PoseClassificationNet
    • People ReIdentification
    • PointPillarNet
    • CitySegFormer
    • Retail Object Detection
    • Retail Object Embedding
    • Optical Inspection
    • Optical Character Detection
    • Optical Character Recognition
    • PCB Classification
    • PeopleSemSegFormer

Technical Blogs

  • Learn how to transform Industrial Defect Detection with NVIDIA TAO and Vision AI Models
  • Read the two-part blog on training and optimizing a 2D body pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train a real-time license plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important; learn how you can achieve state-of-the-art accuracy for classification and object detection models using TAO.
  • Learn how to train an instance segmentation model using MaskRCNN with TAO.
  • Read the technical tutorial on how the PeopleNet model can be trained with custom data using the Transfer Learning Toolkit.
  • Learn how to train and deploy real-time intelligent video analytics apps and services using DeepStream SDK

Suggested Reading

  • More information about the TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone.
  • Read the TAO Quick Start guide and release notes.
  • If you have any questions or feedback, see the discussions on the TAO Toolkit Developer Forums.
  • Deploy your model on the edge using DeepStream. Learn more about DeepStream SDK

License

License to use this model is covered by CC-BY-NC-SA-4.0. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.

Ethical Considerations

NVIDIA Visual ChangeNet-Segmentation model detects changes between pair-wise images. NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.