Image classification is a popular computer vision technique in which an image is classified into one of the designated classes based on its features. This model card contains pretrained weights of most of the popular classification models. These weights may be used as a starting point with the classification app in the Train Adapt Optimize (TAO) Toolkit to facilitate transfer learning. The models described in this card detect missing Printed Circuit Board (PCB) component defects using component-level images extracted from a PCB.
The model in this instance is an image classification model based on the GCViT architecture. It is a classification network that was pre-trained on the NVImageNet dataset and fine-tuned on a proprietary PCB (printed circuit board) dataset. The model uses a GCViT backbone with a single linear head as a binary classifier.
NVImageNet is a commercially friendly image dataset whose category names are aligned with the original ImageNet-1K category names. The original ImageNet-1K is limited to non-commercial use only. Many recent pretraining techniques show the benefits of pretraining on the ImageNet dataset first and then fine-tuning on downstream tasks; however, models pretrained on the original ImageNet may not be allowed for training the models used in our products. Instead, our NVImageNet dataset is free to use for commercial purposes, as approved by our legal team. The dataset was collected from 84 websites that allow their images to be used commercially, and from Bing image search constrained to return only results that are free to share and use commercially.
This model was trained using the `classification_pyt` entrypoint in TAO. The training algorithm optimizes the network to minimize the cross-entropy loss.
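As a rough sketch of this objective (not the actual TAO `classification_pyt` trainer), the PyTorch fragment below optimizes a single linear head over two classes with cross-entropy; the feature dimension, optimizer settings, and label ordering are illustrative assumptions.

```python
# Minimal sketch of the training objective (cross-entropy over 2 classes).
# This is NOT the TAO classification_pyt trainer; values below are illustrative only.
import torch
import torch.nn as nn

num_classes = 2                     # Missing / Present
backbone_out_dim = 512              # placeholder feature size for the GCViT backbone

head = nn.Linear(backbone_out_dim, num_classes)      # single linear classification head
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)

# One illustrative training step on dummy features/labels.
features = torch.randn(8, backbone_out_dim)          # stand-in for backbone features
labels = torch.randint(0, num_classes, (8,))         # 0 = Missing, 1 = Present (assumed order)

logits = head(features)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```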
The PCBClassification model was trained on a proprietary dataset of more than 19,600 images of individual components extracted from 71 PCB boards. The training dataset consists of a mix of components (resistors, capacitors, inductors, etc.) from different PCBs, which may be present or absent (defective) in the images.
Dataset | Total # of images | Training images | Testing images |
---|---|---|---|
NV-PCB Internal Dataset | 19605 | 15573 | 2079 |
The class distribution of the dataset is shown below:
Dataset | Total # of images | # Component present images | # Component missing images |
---|---|---|---|
NV-PCB Internal Dataset | 19605 | 14226 | 5379 |
Components that are present in the image are additionally labelled with their component type, which falls into one of 11 categories; all of these are treated as a single class (component-present) during training and evaluation. Missing components of all types share the same "missing" label.
The performance of the PCBClassification model was measured against more than 2,000 proprietary images. The component-level images vary in resolution and are padded/resized to 224x224 pixels before being passed to the PCBClassification model.
The performance of the PCBClassification Model is mainly measured using accuracy, which is the proportion of correct predictions (all classes) made by the model out of all predictions.
Model | Model Architecture | Testing Images | Accuracy |
---|---|---|---|
PCBClassification Model | GCViT-xxTiny | 2079 | 0.99 |
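As a toy illustration of the metric (with made-up predictions, not the actual evaluation results), accuracy is simply the fraction of test images whose predicted class matches the ground truth:

```python
# Toy accuracy computation: fraction of predictions that match the ground truth.
# The predictions and labels below are made up for illustration only.
preds  = [1, 1, 0, 1, 0, 1]   # predicted class indices (0 = Missing, 1 = Present; assumed order)
labels = [1, 1, 0, 0, 0, 1]   # ground-truth class indices
accuracy = sum(p == t for p, t in zip(preds, labels)) / len(labels)
print(f"accuracy = {accuracy:.3f}")   # 0.833 for this toy example
```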
The inference is run on the provided unpruned model at FP16 precision. The inference performance is measured using `trtexec`. The end-to-end performance with streaming video data might vary slightly depending on other bottlenecks in the hardware and software.
GCViT-xxTiny (224x224 resolution)
Platform | Batch Size | FPS |
---|---|---|
Jetson Orin Nano | 4 | 133.9 |
Orin NX 16GB | 4 | 198 |
AGX Orin 64GB | 16 | 560 |
A2 | 32 | 688 |
T4 | 16 | 1012 |
A30 | 32 | 3221 |
L4 | 8 | 2543 |
L40 | 16 | 6619 |
A100 | 128 | 7095 |
H100 | 128 | 12273 |
These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.
The primary use case intended for these models is detecting missing-component defects in RGB, component-level PCB images. The model can be used to classify objects from photos and videos by using appropriate video or image decoding and pre-processing. The model is a binary classifier that predicts whether a component is present or missing.
These models are intended for training and fine-tuning with the TAO Toolkit and user datasets for image classification. High-fidelity models can be trained for new use cases. A Jupyter notebook is available as part of the TAO container and can be used for re-training.
The models are also intended for easy edge deployment using DeepStream SDK or TensorRT. DeepStream provides the facilities to create efficient video analytic pipelines to capture, decode, and pre-process the data before running inference.
RGB Image of dimensions: 224 X 224 X 3 (W x H x C)
Channel Ordering of the Input: NCHW, where N = Batch Size, C = number of channels (3), H = Height of images (224), W = Width of the images (224)
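A minimal preprocessing sketch is shown below; it letterboxes a component crop to 224x224 and produces an NCHW tensor. The padding strategy and normalization constants are assumptions (they mirror the DeepStream configuration later in this card) rather than the exact TAO pipeline.

```python
# Sketch: prepare a component crop as a 1x3x224x224 NCHW float array.
# The letterbox padding and normalization constants are assumptions, not the exact TAO pipeline.
import numpy as np
from PIL import Image

MEAN = np.array([123.675, 116.28, 103.53], dtype=np.float32)  # RGB means, 0-255 scale
SCALE = 0.01735207357279195                                   # ~1 / 57.63

def preprocess(path, size=224):
    img = Image.open(path).convert("RGB")
    scale = size / max(img.size)                              # fit the longer side to `size`
    img = img.resize((round(img.width * scale), round(img.height * scale)))
    canvas = Image.new("RGB", (size, size))                   # black padding
    canvas.paste(img, ((size - img.width) // 2, (size - img.height) // 2))

    x = np.asarray(canvas, dtype=np.float32)                  # HWC, RGB, 0-255
    x = (x - MEAN) * SCALE                                    # normalize
    return x.transpose(2, 0, 1)[None]                         # HWC -> CHW -> NCHW

batch = preprocess("/path/to/component_crop.jpg")             # hypothetical path
print(batch.shape)                                            # (1, 3, 224, 224)
```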
The output is a binary label assigned to the image by the PCBClassification model: a category label (2 classes - Missing/Present) for each input image containing a single component.
Channel Ordering of the Output: NC, where N = Batch Size, C = number of classes (2).
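Decoding the output reduces to an argmax over the two class scores. The sketch below assumes class index 0 maps to "Missing" and index 1 to "Present" (matching the label file order shown later in this card) and applies a softmax only to report a confidence value.

```python
# Sketch: map the Nx2 output to class labels and confidences.
# The class index ordering (0 = Missing, 1 = Present) is an assumption based on the label file.
import numpy as np

labels = ["Missing", "Present"]

def decode(logits):
    logits = np.atleast_2d(np.asarray(logits, dtype=np.float32))   # ensure NxC
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)                   # softmax per image
    idx = probs.argmax(axis=1)
    return [(labels[i], float(probs[n, i])) for n, i in enumerate(idx)]

print(decode([[2.3, -1.1], [-0.4, 1.9]]))   # e.g. [('Missing', 0.97), ('Present', 0.91)]
```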
Here are two sample images representing a Capacitor - [Present(left), Missing(right)]:
To use these models as pretrained weights for transfer learning, please use the snippet below as a template for the `model` and `train` components of the experiment spec file to train a GCViT classification model. For more information on the experiment spec file, please refer to the TAO Toolkit User Guide - Image Classification PyT.
model:
  init_cfg:
    checkpoint: /path/to/the/gc_vit_xxtiny.pth
  backbone:
    type: gc_vit_xxtiny
    custom_args:
      use_rel_pos_bias: True
  head:
    type: LinearClsHead
To create an end-to-end video analytics application, deploy this model with the DeepStream SDK. DeepStream SDK is a streaming analytics toolkit that accelerates the deployment of AI-based video analytics applications. The model can be integrated directly into DeepStream by following the instructions below.
To deploy these models with DeepStream 6.1, please follow the instructions below:
Download and install the DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream development guide. `/opt/nvidia/deepstream` is the default DeepStream installation directory; this path will be different if you installed DeepStream in a different directory.
See "Exporting The Model" chapter of TAO User Guide for more details on how to export a TAO model. After the model has been generated, two extra files are required which are provided in NVIDIA-AI-IOT.
Missing;Present
configs/multi_task_tao/pgie_multi_task_tao_config.txt
gpu-id=0
net-scale-factor=0.01735207357279195
offsets=123.675;116.28;103.53
model-color-format=0
labelfile-path=/path/to/label/file.txt
onnx-file=/path/to/onnx/model
batch-size=1
network-mode=2
interval=0
gie-unique-id=1
network-type=1
scaling-filter=1
scaling-compute-hw=1
classifier-threshold=0.5
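For reference, Gst-nvinfer applies y = net-scale-factor * (x - offset) per channel. The values above appear to correspond to standard ImageNet normalization in 0-255 pixel scale: per-channel means of 123.675, 116.28, 103.53 and a single scale equal to the reciprocal of the averaged standard deviation (about 57.63). A small sanity check, assuming those standard constants:

```python
# Check that the Gst-nvinfer parameters approximate ImageNet normalization.
# DeepStream applies: y = net_scale_factor * (x - offset) per channel.
import numpy as np

net_scale_factor = 0.01735207357279195
offsets = np.array([123.675, 116.28, 103.53])        # R;G;B means in 0-255 scale

imagenet_std = np.array([58.395, 57.12, 57.375])     # per-channel std in 0-255 scale
print(1.0 / net_scale_factor)                        # ~57.63
print(imagenet_std.mean())                           # 57.63 -> single scale = 1 / mean(std)

x = np.array([150.0, 150.0, 150.0])                  # an arbitrary RGB pixel
print(net_scale_factor * (x - offsets))              # normalized values fed to the network
```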
Run `ds-tao-classifier`:
ds-tao-classifier -c configs/multi_task_tao/pgie_multi_task_tao_config.txt -i file:///path/to/img.jpg
Documentation to deploy with DeepStream is provided in "Deploying to DeepStream" chapter of TAO User Guide.
The PCBClassification model was trained on RGB images captured in good lighting conditions. Therefore, images captured in dark lighting conditions, monochrome images, or IR camera images may not provide good results.
The PCBClassification model was not trained on fish-eye lens cameras or moving cameras. Therefore, the models may not perform well on warped images or images that have motion-induced or other blur.
License to use this model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of these licenses.
NVIDIA PCBClassification model detects missing component defects using component level PCB images. NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.