Visual ChangeNet Classification

Description: Visual ChangeNet - Classification Models
Publisher: -
Latest Version: visual_changenet_nvpcb_trainable_v1.0
Modified: October 16, 2023
Size: 270.12 MB

Visual ChangeNet-Classification Model Card

Model Overview

The model described in this model card detects defective PCB components given component-level PCB images. The inputs are a "golden" (reference) image and an image of the PCB component under inspection; the output is a binary classification label denoting 'defect' or 'no-defect'.

Model Architecture

Visual ChangeNet is a state-of-the-art transformer-based change detection model. It is based on a Siamese network, a class of neural network architectures containing two or more identical subnetworks whose parameters are updated in tandem during training. In TAO, Visual ChangeNet takes two images as input, and the goal is to either classify or segment the change between the "golden" (reference) image and the "test" image. TAO supports the FAN backbone network for both Visual ChangeNet architectures; for more details about training FAN backbones, refer to the Pre-trained FAN-based ImageNet Classification model card. In TAO, two different types of change detection networks are supported:

  • Visual ChangeNet-Segmentation - for segmentation of change between two input images.
  • Visual ChangeNet-Classification - for classification of change between two input images.

Visual ChangeNet-Classification is specifically intended for change classification. In this model card, the Visual ChangeNet-Classification model is leveraged to demonstrate PCB component inspection.
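
For intuition, the following is a minimal PyTorch sketch of the Siamese design. It is an illustrative stand-in, not the TAO implementation: SiameseChangeNet, the backbone argument (standing in for the FAN encoder), and the embedding head are hypothetical; the default embed_dim of 5 mirrors embedding_vectors in the spec file shown later in this card.

import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    """Minimal Siamese sketch: one shared backbone encodes both images."""
    def __init__(self, backbone: nn.Module, embed_dim: int = 5):
        super().__init__()
        self.backbone = backbone        # shared weights serve both branches
        self.head = nn.LazyLinear(embed_dim)

    def forward(self, golden, test):
        # The same parameters process both inputs ("identical subnetworks").
        f_golden = self.head(self.backbone(golden).flatten(1))
        f_test = self.head(self.backbone(test).flatten(1))
        # Euclidean distance between the embeddings is the change signal.
        return torch.pairwise_distance(f_golden, f_test)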

Training

This model was trained using the visual_changenet entrypoint in TAO. The training algorithm optimizes the network to minimize either a contrastive loss or a cross-entropy loss, depending on which Visual ChangeNet-Classification architecture is used. Visual ChangeNet-Classification supports two architectures:

  • Architecture 1: Leverages the last feature map output from the FAN backbone and computes a Euclidean distance between the test and golden image features to optimize a contrastive loss (sketched below).
  • Architecture 2: Leverages feature maps from 4 different transformer layers of the FAN backbone to optimize a learnable difference through an MLP decoder using a cross-entropy loss.
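
Below is a hedged sketch of the contrastive loss used by Architecture 1. This is the standard formulation; the margin default mirrors train_margin_euclid in the spec file later in this card, though the exact TAO formulation may differ.

import torch

def contrastive_loss(distance: torch.Tensor, label: torch.Tensor,
                     margin: float = 2.0) -> torch.Tensor:
    """Standard contrastive loss over Euclidean distances.

    label: 0 for matching ('no-defect') pairs, which are pulled together;
           1 for changed ('defect') pairs, which are pushed apart up to `margin`.
    """
    pos = (1 - label) * distance.pow(2)
    neg = label * torch.clamp(margin - distance, min=0).pow(2)
    return 0.5 * (pos + neg).mean()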

Training Data

The Visual ChangeNet-Classification model was trained on a proprietary dataset of 168,828 images covering 42,207 individual components extracted from 105 PCB boards across 4 different PCB designs. The training dataset consists of a mix of components (resistors, capacitors, inductors, etc.) from different PCBs.

| Dataset | No. of Images | No. of Components | No. of PCBs | No. of Board Designs |
|---|---|---|---|---|
| Nvidia Internal Dataset | 168,828 | 42,207 | 105 | 4 |

The dataset distribution is as follows:

| Dataset | No. of Components | No. of Defects | Defect Rate |
|---|---|---|---|
| Nvidia Internal Dataset | 42,207 | 65 | 0.15% |

| Dataset | No. of Components | Types of LED Illumination per Component |
|---|---|---|
| Nvidia Internal Dataset | 42,207 | 4 |

The following is a sample image showing a PASS component, followed by a missing-component defect. The component images were captured under 4 LED illuminations (Solder, Uniform, LowAngle, and White); the images for the 4 LED lights are concatenated for display in a 2 x 2 grid.

No Defect

Missing Component Defect
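
For reference, the 2 x 2 display grid used above can be reproduced in a few lines of NumPy (illustrative only; the function and argument names are hypothetical):

import numpy as np

def tile_2x2(solder: np.ndarray, uniform: np.ndarray,
             low_angle: np.ndarray, white: np.ndarray) -> np.ndarray:
    """Arrange the 4 LED-illumination captures of one component in a 2 x 2 grid.

    Each input is an H x W x 3 image of the same shape.
    """
    top = np.concatenate([solder, uniform], axis=1)      # side by side
    bottom = np.concatenate([low_angle, white], axis=1)
    return np.concatenate([top, bottom], axis=0)         # stack the two rows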

Performance

Evaluation Data

The model performance was evaluated on a validation dataset with a total of 21,148 components, of which 37 were defective.

Methodology and KPI

The performance of the Visual ChangeNet-Classification model is mainly measured using the False Positive Rate (FPR), or False Alarm Rate: the proportion of PASS components incorrectly identified as DEFECT at a given cutoff of the Siamese score.
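
These KPIs can be reproduced from per-component scores with a short NumPy sketch (variable names are hypothetical; a component is flagged as a defect when its score crosses the cutoff):

import numpy as np

def kpis(scores: np.ndarray, labels: np.ndarray, cutoff: float = 0.03):
    """False Positive Rate and defect-capture rate at a score cutoff.

    labels: 1 = DEFECT, 0 = PASS.
    """
    flagged = scores >= cutoff
    fpr = flagged[labels == 0].mean()      # PASS components wrongly flagged
    capture = flagged[labels == 1].mean()  # true defects caught
    return fpr, capture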

| Model | Model Architecture | Testing Images | False Positive Rate (FPR) | Defect Capture | Score Cutoff |
|---|---|---|---|---|---|
| Visual ChangeNet-Classification | Siamese Network | 21,148 | 0.3% | 97.4% | 0.03 |

Real-time Inference Performance

The inference is run on the provided unpruned model at FP16 precision. The inference performance is measured using trtexec on Jetson AGX Xavier, Xavier NX, Orin, and Orin NX, as well as NVIDIA T4 and Ampere GPUs. The Jetson devices run at the Max-N configuration for maximum GPU frequency. The performance shown here is inference-only; end-to-end performance with streaming video data might vary depending on other bottlenecks in the hardware and software.

| Platform | Batch Size | FPS |
|---|---|---|
| Orin Nano 8GB | 16 | 30.97 |
| Orin NX 16GB | 16 | 44.74 |
| AGX Orin 64GB | 16 | 113.20 |

Using this Model

These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit or TensorRT.

The primary use case intended for these models is Visual ChangeNet-Classification using RGB component-level images. The model is a Siamese network that outputs embedding vectors for image pairs to create a similarity score between them. By applying a Euclidean distance metric or a learnable distance module between golden and sample image pairs, an output score can be generated that indicates whether a component is defective.

These models are intended for training and fine-tuning using the TAO Toolkit and user datasets for image comparison. High-fidelity models can be trained for new use cases. A Jupyter notebook is available as part of the TAO container and can be used to re-train.

The models are also intended for edge deployment using TensorRT.

Input

Two input images:

Golden: RGB image of dimensions 512 x 128 x 3 (H x W x C)

Sample: RGB image of dimensions 512 x 128 x 3 (H x W x C)

Channel ordering of the input: NCHW, where N = batch size, C = number of channels (3), H = height of the images (512), W = width of the images (128)
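
A hedged preprocessing sketch that produces tensors with this layout (PIL and NumPy are used for illustration; the actual TAO pipeline may resize and normalize differently, and the file names are hypothetical):

import numpy as np
from PIL import Image

def load_nchw(path: str, height: int = 512, width: int = 128) -> np.ndarray:
    """Load an RGB image and convert HWC -> NCHW with a batch axis."""
    img = Image.open(path).convert("RGB").resize((width, height))
    chw = np.asarray(img, dtype=np.float32).transpose(2, 0, 1) / 255.0
    return chw[None]                       # shape: (1, 3, 512, 128)

golden = load_nchw("golden.png")           # reference image
sample = load_nchw("sample.png")           # component under inspection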

Output

A classification score for the two input images. A threshold is applied to classify each pair as change vs. no-change.
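
Concretely, the decision rule reduces to a comparison against the chosen cutoff (an illustrative helper; it assumes a higher score indicates a larger golden/test difference):

def classify(score: float, threshold: float) -> str:
    # Assumes higher score = larger difference between golden and test images.
    return "change" if score >= threshold else "no-change"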

Input image

The following is a sample image pair for a capacitor, with the golden and sample images concatenated and displayed side by side, each in a 2 x 2 grid layout.

Using the Model with TAO

To use these models as pretrained weights for transfer learning, use the snippet below as a template for the model and evaluate components of the experiment spec file when training a Visual ChangeNet-Classification (Siamese network) model. For more information on the experiment spec file, see the TAO Toolkit User Guide - Visual ChangeNet-Classification.

model:
  backbone:
    type: "fan_small_12_p4_hybrid"
    pretrained_backbone_path: null
  classify:
    train_margin_euclid: 2.0
    eval_margin: 0.005
    embedding_vectors: 5
    embed_dec: 30
    difference_module: 'learnable'
    learnable_difference_modules: 4
evaluate:
  model_path: "???"

Limitations

Expecting 4 LED Illuminations

The Visual ChangeNet-Classification model was trained on RGB images captured under 4 LED lighting conditions, namely Solder, Uniform, LowAngle, and White lights. Therefore, images captured under different lighting conditions, or with fewer than 4 LED illuminations, might not yield good detection results.

Model Versions

  • changenet_nvpcb_solderlight_trainable_v1.0 - Trainable FAN-Hybrid Small Visual ChangeNet-Classification model using one lighting condition (Solder Light).
  • changenet_nvpcb_solderlight_deployable_v1.0 - FAN-Hybrid Small Visual ChangeNet-Classification model using one lighting condition (Solder Light), deployable to DeepStream.
  • changenet_nvpcb_trainable_v1.0 - Trainable FAN-Hybrid Small Visual ChangeNet-Classification model using 4 lighting conditions.
  • changenet_nvpcb_deployable_v1.0 - FAN-Hybrid Small Visual ChangeNet-Classification model using 4 lighting conditions, deployable to DeepStream.


License

License to use this model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.

Ethical Considerations

NVIDIA Visual ChangeNet-Classification model detects defects in objects using images. NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.