Description
Visual ChangeNet-Segmentation with Foundation Model Backbone on ChangeSim for indoor warehouse change detection.
Publisher
NVIDIA
Latest Version
visual_changenet_dinov2_changesim_trainable_v1.0
Modified
October 2, 2024
Size
3.81 GB

Visual ChangeNet-Segmentation with Foundation Model Backbone - ChangeSim (Commercial)

Model Overview

The Visual ChangeNet-Segmentation Model (ChangeSim) detects changes in industrial indoor environments (warehouses) from a pair of co-registered warehouse images. This model is ready for commercial use.

References:

  • Park, Jin-Man, et al. "Changesim: Towards end-to-end online scene change detection in industrial indoor environments." 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021.
  • Bandara, Wele Gedara Chaminda, and Vishal M. Patel. "A transformer-based siamese network for change detection." IGARSS 2022-2022 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2022.

Model Architecture:

Architecture Type: Transformer-Based
Network Architecture: Siamese Network

Visual ChangeNet is a state-of-the-art transformer-based change detection model. It is based on a Siamese network, a class of neural network architectures containing two or more identical subnetworks; the training algorithm updates the parameters across all subnetworks in tandem. In TAO, Visual ChangeNet takes two images as input, and the end goal is to classify or segment the change between the "golden" (reference) image and the "test" image. More specifically, this model was trained with the NV DINOv2 backbone, which was trained in a self-supervised manner on NVIDIA proprietary data and achieved state-of-the-art accuracy on zero-shot ImageNet classification. To enable the ViT backbone in Visual ChangeNet, the ViT-Adapter is used as the neck architecture; it improves accuracy on dense prediction tasks, such as object detection and segmentation. In TAO, two different types of change detection networks are supported:

  • Visual ChangeNet-Segmentation - for segmentation of change between two input images.
  • Visual ChangeNet-Classification - for classification of change between two input images.

Visual ChangeNet-Segmentation is specifically intended for change segmentation. In this model card, the Visual ChangeNet-Segmentation model demonstrates indoor warehouse change detection on the ChangeSim dataset. The model uses a pretrained NV DINOv2 backbone, trained on NVIDIA proprietary data and then fine-tuned on ChangeSim.
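To make the Siamese design concrete, here is a minimal PyTorch-style sketch. It is illustrative only: the class name, toy backbone, and decoder are placeholders, not the actual TAO implementation, where the backbone is NV DINOv2 (ViT-L) with a ViT-Adapter neck.

import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    # One shared backbone is applied to both images ("identical subnetworks"),
    # so its weights are updated in tandem; a decoder compares the features.
    def __init__(self, backbone: nn.Module, decoder: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.decoder = decoder

    def forward(self, golden: torch.Tensor, sample: torch.Tensor) -> torch.Tensor:
        feat_golden = self.backbone(golden)   # reference-image features
        feat_sample = self.backbone(sample)   # same weights, test image
        return self.decoder(torch.cat([feat_golden, feat_sample], dim=1))

# Toy stand-ins so the sketch runs end to end:
backbone = nn.Conv2d(3, 16, 3, padding=1)   # stand-in for NV DINOv2 + ViT-Adapter
decoder = nn.Conv2d(32, 5, 1)               # stand-in head with 5 change classes
net = SiameseChangeNet(backbone, decoder)
logits = net(torch.randn(1, 3, 512, 512), torch.randn(1, 3, 512, 512))
print(logits.shape)                         # torch.Size([1, 5, 512, 512])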

Input:

Input Type(s): Images
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: Three-Dimensional (3D)
Other Properties Related to Input:
Two input images:

  • Golden: RGB image of dimensions 512 x 512 x 3 (H x W x C)
  • Sample: RGB image of dimensions 512 x 512 x 3 (H x W x C)

Channel Ordering of the Input: NCHW, where N = Batch Size, C = number of channels (3), H = Height of images (512), W = Width of the images (512)

Figure: a sample pre- and post-change image pair alongside the ground-truth segmentation change map, shown side by side.

Output:

Output Type(s): Segmentation Change Map
Output Format: 3D Vector
Other Properties Related to Output:
Segmentation change map with the same spatial resolution as the input images: 512 x 512 x 5 (H x W x C), where C = number of output change classes.
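The following minimal sketch shows how an image pair maps to the NCHW input tensors and how a class map is recovered from the output. File names are hypothetical, and the simple 0-1 scaling stands in for whatever normalization the TAO pipeline actually applies.

import numpy as np
import torch
from PIL import Image

def to_nchw(path: str) -> torch.Tensor:
    # Load RGB, resize to the 512 x 512 network resolution, scale to [0, 1],
    # and reorder H x W x C -> N x C x H x W.
    img = Image.open(path).convert("RGB").resize((512, 512))
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return torch.from_numpy(arr).permute(2, 0, 1).unsqueeze(0)

golden = to_nchw("golden.png")   # hypothetical file names
sample = to_nchw("sample.png")
print(golden.shape)              # torch.Size([1, 3, 512, 512])

# The network outputs per-pixel logits over the 5 change classes,
# shape N x 5 x 512 x 512; argmax over the class axis yields the map.
logits = torch.randn(1, 5, 512, 512)   # placeholder output
change_map = logits.argmax(dim=1)      # 1 x 512 x 512, values in {0..4}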

Software Integration:

Runtime Engine(s):

  • TAO - 5.2

Supported Hardware Architecture(s):

  • NVIDIA Ampere
  • NVIDIA Jetson
  • NVIDIA Hopper
  • NVIDIA Lovelace
  • NVIDIA Pascal
  • NVIDIA Turing
  • NVIDIA Volta

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra

Model Version(s):

  • visual_changenet_dinov2_changesim_trainable_v1.0 - Trainable NV DINOv2 Visual ChangeNet-Segmentation model for ChangeSim.
  • visual_changenet_dinov2_changesim_deployable_v1.0 - NV DINOv2 Visual ChangeNet-Segmentation model for ChangeSim, deployable to DeepStream.

Training & Evaluation:

This model was trained using the visual_changenet entrypoint in TAO. The training algorithm optimizes the network to minimize the cross-entropy loss for every pixel of the mask.
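As an illustration of that objective (not the actual TAO training code), per-pixel cross-entropy over a 5-class change mask can be computed as follows:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()  # averages the loss over every pixel
logits = torch.randn(2, 5, 512, 512, requires_grad=True)  # N x C x H x W predictions
target = torch.randint(0, 5, (2, 512, 512))                # N x H x W class ids
loss = criterion(logits, target)
loss.backward()  # gradients flow to every pixel's prediction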

Using this Model

These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit or TensorRT.

The primary use case for these models is Visual ChangeNet-Segmentation using RGB images. The model is a Siamese Network that outputs semantic change maps denoting pixel-level change between the two images.

These models are intended for training and fine-tuning with the TAO Toolkit and your datasets for image comparison. High-fidelity models can be trained on new use cases. A Jupyter notebook is available as part of the TAO container and can be used for re-training.

The models are also intended for edge deployment using TensorRT.
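As a sketch of that deployment path (assuming a TensorRT 8.x Python environment and a hypothetical ONNX export named changenet.onnx), an FP16 engine, matching the precision used in the benchmarks below, can be built like this:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("changenet.onnx", "rb") as f:   # hypothetical export of this model
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)     # FP16, as in the KPI runs below
engine_bytes = builder.build_serialized_network(network, config)

with open("changenet.engine", "wb") as f:
    f.write(engine_bytes)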

Using the Model with TAO

To use these models as pretrained weights for transfer learning, use the following as a template for the model and train component of the experiment spec file to train a Siamese Network model. For more information on the experiment spec file, see the TAO Toolkit User Guide - Visual ChangeNet-Segmentation.

model:
  backbone:
    type: "vit_large_nvdinov2"         # NV DINOv2 ViT-L backbone
    pretrained_backbone_path: null     # optional path to pretrained backbone weights
    freeze_backbone: False             # True freezes the backbone during fine-tuning

Training Dataset:

Data Collection Method by dataset:

  • Synthetic

Labeling Method by dataset:

  • Synthetic

Properties:
Open-source ChangeSim dataset collected in photo-realistic simulation environments with environmental non-targeted variations, such as air turbidity and lighting changes, as well as targeted object changes in industrial indoor environments. It is an indoor warehouse change detection dataset that contains pre- and post-change image pairs at a resolution of 640 x 480 (W x H). These images are resized to 512 x 512. The dataset is split into a training set of 13,225 samples and an evaluation set of 8,212 samples.

Dataset   | No. of Images
ChangeSim | 21,437
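A minimal sketch of the resize step (illustrative file names; Pillow's bilinear filter is an assumption about the actual preprocessing):

from PIL import Image

# Resize a ChangeSim pre/post pair from 640 x 480 (W x H) to the
# 512 x 512 network input resolution.
for name in ("t0.png", "t1.png"):   # hypothetical pair file names
    img = Image.open(name)
    img.resize((512, 512), Image.BILINEAR).save("resized_" + name)

# Ground-truth change masks should instead use Image.NEAREST so that
# integer class ids are not blended by interpolation.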

Evaluation Dataset:

Data Collection Method by dataset:

  • Synthetic

Labeling Method by dataset:

  • Synthetic

Properties:
Open-source ChangeSim warehouse change detection dataset of 8,212 images.

Methodology and KPI

The performance of the Visual ChangeNet-Segmentation model for multi-class semantic change detection is measured using overall accuracy, average precision, average recall, and average IoU score across all classes.

Model                         | Model Architecture | Testing Images | Precision (%) | Recall (%) | IoU (%) | F1 (%) | Overall Accuracy (%)
Visual ChangeNet-Segmentation | Siamese Network    | 8,212          | 57            | 45         | 36.5    | 48.1   | 92.83

To compare with the metrics reported by ChangeSim, here are the IoU scores for individual change classes for the above model:

Model                         | Model Architecture | Testing Images | M  | N     | Re    | Ro   | S    | mIoU
Visual ChangeNet-Segmentation | Siamese Network    | 8,212          | 16 | 19.76 | 19.64 | 33.6 | 93.5 | 36.5

Here M, N, Re, Ro, and S are the IoU scores for the five ChangeSim change classes: Missing, New, Replaced, Rotated, and Static.
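For reference, mIoU is the mean of these per-class IoU scores. A minimal NumPy sketch of the computation (illustrative, not the TAO evaluation code):

import numpy as np

def per_class_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int = 5):
    # IoU per class for integer class maps of identical shape.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious

pred = np.random.randint(0, 5, (512, 512))   # placeholder prediction
gt = np.random.randint(0, 5, (512, 512))     # placeholder ground truth
ious = per_class_iou(pred, gt)
print(ious, np.nanmean(ious))                # per-class IoU and mIoU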

Inference:

Engine: TensorRT
Test Hardware:

  • A2
  • A30
  • DGX A100
  • DGX H100
  • JAO 64GB
  • Jetson AGX Xavier
  • L4
  • L40
  • NVIDIA T4
  • Orin
  • Orin Nano 8GB
  • Orin NX
  • Orin NX 16GB
  • Xavier NX

The inference is run on the provided unpruned model at FP16 precision. Inference performance is measured using trtexec on Jetson AGX Xavier, Xavier NX, Orin, Orin NX, NVIDIA T4, and Ampere GPUs. The Jetson devices run at the Max-N configuration for maximum GPU frequency. The performance shown here is inference-only; end-to-end performance with streaming video data might vary depending on other bottlenecks in the hardware and software.

NVDINOv2 + ViT-Adapter + Visual ChangeNet

Platform      | Batch Size | FPS
Orin NX 16GB  | 16         | 1.5
AGX Orin 64GB | 16         | 9.41
A2            | 8          | 5.9
T4            | 16         | 2.29
L4            | 16         | 4.68
A30           | 16         | 35.8
L40           | 16         | 11.3
A100          | 32         | 10.8
H100          | 32         | 23.5

Technical Blogs

  • Learn how to transform Industrial Defect Detection with NVIDIA TAO and Vision AI Models.
  • Read the two-part blog on training and optimizing a 2D body pose estimation model with TAO - Part 1 | Part 2.
  • Learn how to train a real-time license plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important; learn how you can achieve state-of-the-art accuracy for classification and object detection models using TAO.
  • Learn how to train an instance segmentation model using MaskRCNN with TAO.
  • Read the technical tutorial on how the PeopleNet model can be trained with custom data using the Transfer Learning Toolkit.
  • Learn how to train and deploy real-time intelligent video analytics apps and services using the DeepStream SDK.

Suggested Reading

  • More information about the TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone.
  • Read the TAO Quick Start guide and release notes.
  • If you have any questions or feedback, see the discussions on TAO Toolkit Developer Forums.
  • Deploy your model on the edge using DeepStream. Learn more about DeepStream SDK.

Limitations

Expecting Co-Registered Imagery

The Visual ChangeNet-Segmentation model was trained on pairwise indoor warehouse imagery; it might not perform well on misaligned image pairs or on imagery outside warehouse scene change detection.

License

The license to use the model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Promise and the Explainability, Bias, Safety & Security, and Privacy Subcards.