A Siamese Network is a class of neural network architectures that contains two or more identical subnetworks sharing the same parameters; training updates those parameters in tandem across all subnetworks. The network measures the similarity between its inputs by computing the Euclidean distance between their feature vectors. In this specific use case, the inputs are a "golden" (reference) image and the image of the PCB component under inspection.
The model in this instance uses a Siamese Network architecture. It was trained using the optical_inspection entrypoint in TAO. The training algorithm optimizes the network to minimize the contrastive loss.
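For reference, the sketch below is a minimal PyTorch-style illustration of a Siamese pair with shared weights trained with a contrastive loss. It is not the TAO implementation: the backbone is a placeholder, and only the embedding size (5) and margin (2.0) mirror the spec file shown later in this card.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiamesePair(nn.Module):
    """Illustrative twin network: both branches use the same weights."""
    def __init__(self, embedding_dim: int = 5):
        super().__init__()
        # Hypothetical lightweight backbone; TAO uses its own "custom" backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embedding_dim),
        )

    def forward(self, golden, sample):
        # The same backbone (same parameters) embeds both inputs.
        return self.backbone(golden), self.backbone(sample)

def contrastive_loss(emb_golden, emb_sample, label, margin: float = 2.0):
    """label = 0 for a matching (PASS) pair, 1 for a non-matching (DEFECT) pair."""
    d = F.pairwise_distance(emb_golden, emb_sample)
    return torch.mean((1 - label) * d.pow(2) + label * F.relu(margin - d).pow(2))

# Toy usage with the documented input shape (N x 3 x 512 x 128).
model = SiamesePair()
golden = torch.randn(4, 3, 512, 128)
sample = torch.randn(4, 3, 512, 128)
labels = torch.tensor([0.0, 0.0, 1.0, 0.0])
loss = contrastive_loss(*model(golden, sample), labels, margin=2.0)
loss.backward()
```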
The Siamese Network model was trained on a proprietary dataset of 42,207 individual components (168,828 images) extracted from 105 PCB boards spanning 4 different PCB designs. The training dataset consists of a mix of components (resistors, capacitors, inductors, etc.) from different PCBs.
Dataset | No. of images | No. of Components | No. of PCBs | No. of Board Designs |
---|---|---|---|---|
Nvidia Internal Dataset | 168828 | 42207 | 105 | 4 |
The dataset distribution is summarized in the tables below:
Dataset | No. of Components | No. of Defects | Defect Rate |
---|---|---|---|
Nvidia Internal Dataset | 42207 | 65 | 0.15% |
Dataset | No. of components | Types of LED illumination per component |
---|---|---|
Nvidia Internal Dataset | 42207 | 4 |
The following is a sample image showing a PASS component. The component images shown below were captured under 4 LED illuminations (Solder, Uniform, LowAngle, and White). The images from the 4 LED lights were concatenated for display in a 2 x 2 grid.
No Defect
Missing Component Defect
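For visualization, a 2 x 2 display grid like the ones above can be assembled by tiling the four per-LED crops. The NumPy sketch below assumes four equally sized H x W x 3 arrays; the crop size is illustrative only.

```python
import numpy as np

def make_2x2_grid(solder, uniform, low_angle, white):
    """Tile four equally sized H x W x 3 images into a single 2 x 2 grid for display."""
    top = np.concatenate([solder, uniform], axis=1)       # side by side
    bottom = np.concatenate([low_angle, white], axis=1)
    return np.concatenate([top, bottom], axis=0)          # stacked vertically

# Illustrative 128 x 128 crops of one component under the 4 LED illuminations.
crops = [np.zeros((128, 128, 3), dtype=np.uint8) for _ in range(4)]
grid = make_2x2_grid(*crops)   # -> 256 x 256 x 3
```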
The model performance was evaluated on a validation dataset of 21,148 components, 37 of which were defective.
The performance of the Optical Inspection model is mainly measured using the False Positive Rate (FPR) or False Alarm Rate. It is the proportion of PASS components incorrectly identified as DEFECTS for a given cutoff of the Siamese Score.
Model | Model Architecture | Testing Images | False Positive Rate (FPR) % | Defect Capture % | Score Cutoff |
---|---|---|---|---|---|
Optical Inspection | Siamese Network | 21148 | 0.97% | 100% | 0.3 |
Optical Inspection | Siamese Network | 21148 | 0.11% | 97% | 0.5 |
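The two rows above correspond to different operating points on the same score distribution. For reference, the sketch below shows how FPR and defect capture can be computed from per-component scores at a given cutoff; the scores and labels are synthetic, and it assumes the convention that a component is flagged as a defect when its Siamese score (embedding distance) exceeds the cutoff.

```python
import numpy as np

def fpr_and_capture(scores, is_defect, cutoff):
    """scores: Siamese (distance) score per component; is_defect: ground-truth labels."""
    scores = np.asarray(scores, dtype=float)
    is_defect = np.asarray(is_defect, dtype=bool)
    flagged = scores > cutoff                   # predicted DEFECT
    fpr = np.mean(flagged[~is_defect])          # PASS components wrongly flagged
    capture = np.mean(flagged[is_defect])       # true defects that were caught
    return fpr * 100.0, capture * 100.0

# Synthetic example only: the real evaluation used 21,148 components.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.1, 0.1, 1000), rng.normal(0.9, 0.2, 10)])
labels = np.concatenate([np.zeros(1000, dtype=bool), np.ones(10, dtype=bool)])
print(fpr_and_capture(scores, labels, cutoff=0.5))
```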
Inference is run on the provided unpruned model at FP16 precision. Inference performance is measured using trtexec on Jetson AGX Xavier, Xavier NX, AGX Orin, and Orin NX, and on NVIDIA T4 and Ampere GPUs. The Jetson devices run in the Max-N configuration for maximum GPU frequency. The numbers shown here are for inference only; end-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.
Model Arch | Version | Inference Resolution | Precision | Xavier NX (GPU) | Xavier NX (DLA1+DLA2) | AGX Xavier (GPU) | AGX Xavier (DLA1+DLA2) | Orin NX (GPU) | Orin NX (DLA1+DLA2) | AGX Orin (GPU) | AGX Orin (DLA1+DLA2) | T4 (GPU) | A100 (GPU) | A30 (GPU) | A10 (GPU) | A2 (GPU) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Siamese Network | Unpruned | 2x512x128x3 | FP16 | - | - | - | - | - | - | - | - | - | - | - | - | - |
These models need to be used with NVIDIA hardware and software. On the hardware side, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. On the software side, the models can only be used with the Train Adapt Optimize (TAO) Toolkit or TensorRT.
The primary use case intended for these models is optical inspection using RGB component-level images. The model is a Siamese Network that outputs embedding vectors for image pairs in order to produce a similarity score between them. By applying a Euclidean distance metric to the embedding vectors of the golden and sample image pairs, an output score can be generated that indicates whether a component is defective.
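Concretely, once the two embeddings are available, the decision reduces to a Euclidean distance and a threshold. The NumPy sketch below uses made-up embedding values, and the 0.5 cutoff from the evaluation table above is only one possible operating point.

```python
import numpy as np

def siamese_score(golden_embedding, sample_embedding):
    """Euclidean distance between the golden and sample embeddings (N x D or D,)."""
    diff = np.asarray(golden_embedding) - np.asarray(sample_embedding)
    return np.linalg.norm(diff, axis=-1)

def is_defect(golden_embedding, sample_embedding, cutoff=0.5):
    """Flag the component as a defect when the score exceeds the chosen cutoff."""
    return siamese_score(golden_embedding, sample_embedding) > cutoff

# Example with the documented 1 x 5 embeddings (values are made up).
golden = np.array([[0.12, -0.40, 0.05, 0.33, -0.21]])
sample = np.array([[0.10, -0.38, 0.07, 0.35, -0.19]])
print(siamese_score(golden, sample), is_defect(golden, sample))
```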
These models are intended for training and fine-tuning with the TAO Toolkit and the user's own datasets for image comparison. High-fidelity models can be trained for new use cases. A Jupyter notebook is available as part of the TAO container and can be used to re-train the model.
The models are also intended for easy edge deployment using TensorRT.
Two input images:
Golden: RGB Image of dimensions: 512 X 128 X 3 (H x W x C)
Sample: RGB Image of dimensions: 512 X 128 X 3 (H x W x C)
Channel Ordering of the Input: NCHW, where N = Batch Size, C = number of channels (3), H = Height of images (512), W = Width of the images (128)
Golden: Embedding: 1 X 5 (N x D)
Sample: Embedding: 1 X 5 (N x D)
Channel Ordering of the Output: N x D, where N = Batch Size, D = Number of Dimensions (5).
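The shapes above can be sanity-checked as follows. Note that stacking the four per-LED crops along the height axis to form the 512 x 128 input is an assumption made here for illustration; refer to the TAO Toolkit User Guide for the exact input preparation.

```python
import numpy as np

# Four hypothetical 128 x 128 x 3 crops of the same component (Solder, Uniform,
# LowAngle, White). Stacking them along the height axis gives the 512 x 128 x 3
# input described above -- an assumption about how the concatenation is done.
leds = [np.zeros((128, 128, 3), dtype=np.float32) for _ in range(4)]
hwc = np.concatenate(leds, axis=0)               # (512, 128, 3) = H x W x C

# NCHW ordering expected by the network: N x 3 x 512 x 128.
nchw = np.transpose(hwc, (2, 0, 1))[np.newaxis]  # (1, 3, 512, 128)
assert nchw.shape == (1, 3, 512, 128)

# The model maps each image of the (golden, sample) pair to an N x D embedding,
# with D = 5, so a batch of one pair yields two arrays of shape (1, 5).
```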
Here is a sample image for a Capacitor, with the golden and sample images concatenated and displayed side by side using a 2 x 2 grid layout.
To use these models as pretrained weights for transfer learning, please use the snippet below as a template for the model and train components of the experiment spec file to train a Siamese Network model. For more information on the experiment spec file, please refer to the TAO Toolkit User Guide - Optical Inspection.
model:
  model_type: Siamese
  model_backbone: custom
  embedding_vectors: 5
  margin: 2.0
evaluate:
  checkpoint: "${results_dir}/train/oi_model_epoch=004.pth"
The Siamese Network model was trained on RGB images captured under 4 LED lighting conditions, namely Solder, Uniform, LowAngle, and White lights. Therefore, images captured under different lighting conditions or with fewer than 4 LED illuminations may not yield good detection results.
License to use this model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of these licenses.
NVIDIA Optical Inspection model detects defects in objects using images. NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.