The model described in this model card detects land cover semantic changes in remote sensing imagery (RSI). The inputs are a "golden" (reference) image and a test image of the same land cover area under observation (captured between 1990 and 2010), and the output is a semantic change map denoting the semantic change between the two images.
Visual ChangeNet is a state-of-the-art transformer-based change detection model. Visual ChangeNet is based on a Siamese network, which is a class of neural network architectures containing two or more identical subnetworks. The training algorithm works by updating the parameters across all the subnetworks in tandem. In TAO, Visual ChangeNet takes two images as input, where the end goal is to either classify or segment the change between the "golden" (reference) image and the "test" image. TAO supports the FAN backbone network for both Visual ChangeNet architectures. For more details about training FAN backbones, see the Pre-trained FAN based ImageNet Classification. In TAO, two different types of change detection networks are supported:

- Visual ChangeNet-Segmentation: intended for segmenting the change map between the two input images.
- Visual ChangeNet-Classification: specifically intended for change classification.

In this model card, the Visual ChangeNet-Segmentation model is leveraged to demonstrate land cover semantic change detection on the LandSat-SCD dataset. The model uses a FAN backbone pretrained on the NVImageNet dataset and is then fine-tuned on the LandSat-SCD dataset.
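For intuition, here is a minimal PyTorch sketch of the Siamese design. This is an illustration only, not the actual TAO implementation: the generic `encoder`, the `feat_channels` value, and the single-convolution decoder are assumptions standing in for the FAN backbone and the real decoder head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseChangeNet(nn.Module):
    """Minimal Siamese change-detection sketch: one shared encoder processes
    both images; the features are fused and decoded into a per-pixel change map."""

    def __init__(self, encoder: nn.Module, feat_channels: int = 256, num_classes: int = 10):
        super().__init__()
        self.encoder = encoder  # shared weights: the same module sees both images
        self.decoder = nn.Conv2d(2 * feat_channels, num_classes, kernel_size=1)

    def forward(self, golden: torch.Tensor, test: torch.Tensor) -> torch.Tensor:
        f_ref = self.encoder(golden)             # (N, C', H', W')
        f_test = self.encoder(test)              # same weights -> comparable features
        fused = torch.cat([f_ref, f_test], dim=1)
        logits = self.decoder(fused)             # per-pixel change-class logits
        # upsample back to the input resolution
        return F.interpolate(logits, size=golden.shape[-2:], mode="bilinear", align_corners=False)
```

Because the two branches share weights, the features extracted from the reference and test images live in the same embedding space, which is what makes the pixel-wise comparison meaningful.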
This model was trained using the `visual_changenet` entrypoint in TAO. The training algorithm optimizes the network to minimize the cross-entropy loss for every pixel of the mask.
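Concretely, per-pixel cross-entropy treats each pixel of the change map as its own classification problem. A hedged PyTorch sketch follows; the 10-class, 416 x 416 shapes mirror the input/output description later in this card, and the random tensors are placeholders for real network outputs and labels.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()  # averages the loss over every pixel of the mask

# logits: raw per-pixel class scores; target: integer change-class label per pixel
logits = torch.randn(2, 10, 416, 416, requires_grad=True)  # (N, classes, H, W)
target = torch.randint(0, 10, (2, 416, 416))               # (N, H, W)

loss = criterion(logits, target)
loss.backward()
```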
The Visual ChangeNet-Segmentation model was trained on an open-source remote sensing semantic land change detection dataset called LandSat-SCD. LandSat-SCD is a land cover change detection dataset that contains remote sensing image pairs of resolution 416 x 416. The full dataset consists of 8468 images, randomly split into three parts to make train, val, and test sets of 6053, 1729, and 686 samples, respectively (a split sketch follows the table below).
| Dataset | No. of images |
|---|---|
| LandSat-SCD | 8468 |
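A random split matching these counts could be produced as follows. This is a sketch only; if LandSat-SCD ships predefined split files, those should take precedence over re-splitting.

```python
import random

def split_dataset(pair_ids, seed=0):
    """Shuffle image-pair IDs and split them 6053 / 1729 / 686,
    matching the train/val/test counts above."""
    pair_ids = list(pair_ids)
    random.Random(seed).shuffle(pair_ids)
    return pair_ids[:6053], pair_ids[6053:7782], pair_ids[7782:]

train, val, test = split_dataset(range(8468))
assert (len(train), len(val), len(test)) == (6053, 1729, 686)
```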
The following is a sample showing the pre- and post-change images along with the ground-truth segmentation change maps.
The model performance was evaluated on the test dataset, which has a total of 686 images.
The performance of the Visual ChangeNet-Segmentation model for multi-class semantic change detection is measured using overall accuracy and the average precision, recall, IoU, and F1 scores across all classes (the metric definitions are sketched after the table).
| Model | Model Architecture | Testing Images | Precision | Recall | IoU | F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|
| Visual ChangeNet-Segmentation | Siamese Network | 686 | 88.64 | 85.9 | 77.88 | 87.15 | 95.77 |
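For reference, these metrics can be computed from a per-class confusion matrix over all pixels. Below is a minimal NumPy sketch of the definitions, macro-averaged over classes; the helper name and the epsilon guard are illustrative, not taken from the TAO code.

```python
import numpy as np

def change_metrics(conf: np.ndarray):
    """conf[i, j] = number of pixels with true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp          # predicted as the class but wrong
    fn = conf.sum(axis=1) - tp          # belong to the class but missed
    eps = 1e-9                          # avoid division by zero for empty classes
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    overall_acc = tp.sum() / conf.sum()
    # macro averages across all change classes
    return precision.mean(), recall.mean(), iou.mean(), f1.mean(), overall_acc
```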
The inference is run on the provided unpruned model at FP16 precision. The inference performance is measured using `trtexec` on Jetson AGX Xavier, Xavier NX, Orin, Orin NX, and NVIDIA T4 and Ampere GPUs. The Jetson devices run at the Max-N configuration for maximum GPU frequency. The numbers shown here are inference-only performance; end-to-end performance with streaming video data might vary depending on other bottlenecks in the hardware and software. A sample `trtexec` invocation is sketched after the table below.
| Platform | Batch Size | FPS |
|---|---|---|
| Orin Nano 8GB | 16 | 4.91 |
| Orin NX 16GB | 16 | 7.11 |
| AGX Orin 64GB | 16 | 18.25 |
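The invocation below is a sketch of how such a benchmark might be run; the ONNX file name and the input tensor names (`input0`, `input1`) are placeholders for whatever the exported model actually uses.

```sh
trtexec --onnx=changenet_segment.onnx --fp16 \
        --shapes=input0:16x3x416x416,input1:16x3x416x416
```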
These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit or TensorRT.
The primary use case for these models is semantic change segmentation using RGB images. The model is a Siamese network that outputs semantic change maps denoting pixel-level change between the two images.
These models are intended for training and fine-tuning using the TAO Toolkit and user datasets for image comparison. High-fidelity models can be trained for new use cases. A Jupyter notebook is available as a part of the TAO container and can be used to re-train.
The models are also intended for edge deployment using TensorRT.
Two input images (a preprocessing sketch follows the output description below):
Golden: RGB Image of dimensions: 416 X 416 X 3 (H x W x C)
Sample: RGB Image of dimensions: 416 X 416 X 3 (H x W x C)
Channel Ordering of the Input: NCHW, where N = Batch Size, C = number of channels (3), H = Height of images (416), W = Width of the images (416)
Segmentation change map with the same resolution as the input images: 416 X 416 X 10 (H x W x C), where C = number of output change classes.
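A hedged Python sketch of how a golden/sample pair maps onto the NCHW layout above; the file names are placeholders, and the [0, 1] scaling is illustrative rather than the exact TAO normalization.

```python
import numpy as np
from PIL import Image

def to_nchw(path: str) -> np.ndarray:
    """Load an RGB image and convert HWC uint8 -> NCHW float batch of 1."""
    img = Image.open(path).convert("RGB").resize((416, 416))
    arr = np.asarray(img, dtype=np.float32) / 255.0  # scale to [0, 1]
    return arr.transpose(2, 0, 1)[None]              # (1, 3, 416, 416)

golden = to_nchw("golden.png")   # reference image
sample = to_nchw("sample.png")   # test image
# the model then outputs per-pixel logits over 10 change classes: (1, 10, 416, 416)
```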
Here is a sample showing the pre- and post-change images along with the ground-truth segmentation change map side by side.
To use these models as pretrained weights for transfer learning, use the snippet below as a template for the model and evaluate components of the experiment spec file to train a Siamese network model. For more information on the experiment spec file, see the TAO Toolkit User Guide - Visual ChangeNet-Segmentation.
```yaml
model:
  backbone:
    type: "fan_small_12_p4_hybrid"
    pretrained_backbone_path: null
evaluate:
  model_path: "???"
```
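With the spec saved (for example as experiment.yaml, a placeholder name), training can then be launched through the same entrypoint. The exact CLI form depends on your TAO Toolkit version, so treat this as a sketch:

```sh
tao model visual_changenet train -e experiment.yaml
```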
The Visual ChangeNet-Segmentation network was trained on pair-wise, co-registered remote sensing imagery and might not perform well on mis-aligned image pairs or on images not captured using remote sensing.
License to use this model is covered by CC BY 4.0. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.
NVIDIA Visual ChangeNet-Segmentation model detects changes between pair-wise images. NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.