NVCLIP is an NVIDIA version of the Contrastive Language-Image Pre-Training (CLIP) model that transforms an image into a vector embedding aligned with text embeddings. This model is ready for commercial/non-commercial use.
Architecture Type: Transformer-Based
In TAO, you can use NVCLIP in conjunction with TAO-MMclassification.
As a backbone, NVCLIP can be used for various downstream tasks such as classification, detection, segmentation, and text-based image retrieval.
Input Type(s): Images
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: Three-Dimensional (3D)
Other Properties Related to Input:
Channel Ordering of the Input: NCHW, where N = Batch Size, C = number of channels (3), H = Height of images (336), W = Width of the images (336)
Output Type(s): Embedding - Float tensor
Output Format: 3D Vector
Other Properties Related to Output:
The output of this model is an image embedding of size 1024 for the ViT-H variant and 768 for the ViT-L variant, as illustrated in the sketch below.
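As an illustration only, the following is a minimal sketch of extracting such an embedding with the open-source open_clip Python package, which matches the backbone type used in the spec file later in this card. The model name is taken from that spec, while the checkpoint path and image file are placeholders rather than published artifacts, so treat this as a sketch of the idea and not the official TAO inference path.

```python
# Minimal sketch (not the official TAO inference path): load an open_clip ViT
# backbone and produce a single image embedding. The checkpoint path is a
# placeholder for locally downloaded NVCLIP weights.
import torch
import open_clip
from PIL import Image

model_name = "ViT-L-14-SigLIP-CLIPA-336"   # same name as in the spec file below
model, _, preprocess = open_clip.create_model_and_transforms(
    model_name, pretrained="/path/to/nvclip_checkpoint.pt"
)
model.eval()

# preprocess() resizes/crops to the expected 336x336 RGB input and returns CHW;
# unsqueeze adds the batch dimension to match the NCHW layout described above.
image = preprocess(Image.open("sample.jpg")).unsqueeze(0)

with torch.no_grad():
    embedding = model.encode_image(image)
    embedding = embedding / embedding.norm(dim=-1, keepdim=True)  # L2-normalize

print(embedding.shape)  # (1, 768) for ViT-L; ViT-H variants produce 1024-dim embeddings
```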
Runtime Engine(s):
Supported Hardware Architecture(s):
Supported Operating System(s):
This model can be used as a backbone and trained using the classification_pyt entrypoint in TAO. The training algorithm performs linear-probe fine-tuning for the classification task.
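For context, a linear probe trains only a lightweight linear classification head on top of frozen backbone features. The sketch below is a generic PyTorch illustration of that idea with placeholder dimensions and random stand-in data; it is not the classification_pyt implementation.

```python
# Generic linear-probe sketch (illustrative, not the TAO classification_pyt code):
# the NVCLIP backbone stays frozen and only a linear head is trained.
import torch
import torch.nn as nn

embed_dim, num_classes = 1024, 10   # e.g. ViT-H embedding size and a 10-class problem
head = nn.Linear(embed_dim, num_classes)            # the only trainable component
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(frozen_embeddings: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on pre-computed embeddings from the frozen backbone."""
    optimizer.zero_grad()
    logits = head(frozen_embeddings)                # (batch, num_classes)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in data:
loss = train_step(torch.randn(32, embed_dim), torch.randint(0, num_classes, (32,)))
```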
These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit or TensorRT.
The primary use case for these models is extracting feature embeddings from images. These embeddings can then be used for curation, clustering, and zero-shot or few-shot downstream tasks such as classification. The embeddings can also be used for text-based image retrieval.
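As a hedged illustration of the retrieval and clustering use cases, the sketch below ranks a set of pre-computed, L2-normalized image embeddings against a query embedding by cosine similarity. The random tensors are placeholders for real NVCLIP embeddings; the query could come from either an image (image similarity) or a text prompt (text-based retrieval).

```python
# Illustrative retrieval sketch over pre-computed, L2-normalized embeddings.
# Random tensors stand in for real NVCLIP image and query embeddings.
import torch
import torch.nn.functional as F

embed_dim, num_images = 1024, 10_000   # placeholder sizes (ViT-H embedding width)
image_embeddings = F.normalize(torch.randn(num_images, embed_dim), dim=-1)
query_embedding = F.normalize(torch.randn(1, embed_dim), dim=-1)

# Cosine similarity reduces to a matrix product on L2-normalized vectors.
similarity = query_embedding @ image_embeddings.T          # shape (1, num_images)
top_scores, top_indices = similarity.topk(k=5, dim=-1)     # five closest images
print(top_indices.tolist(), top_scores.tolist())
```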
These models are intended for training and fine-tuning with the TAO Toolkit and your datasets for image comparison. High-fidelity models can be trained on new use cases. A Jupyter Notebook is available as part of the TAO container and can be used for re-training.
The models are also intended for edge deployment using TensorRT.
To use these models as pretrained weights for transfer learning, use the following as a template for the model and train components of the experiment spec file to train an NVCLIP model. For more information on the experiment spec file, see the TAO Toolkit User Guide - NVCLIP.
```yaml
model:
  backbone:
    type: "open_clip"
    custom_args:
      model_name: "ViT-L-14-SigLIP-CLIPA-336"
    freeze: true
  init_cfg:
    checkpoint: "Path to the checkpoint"
```
Data Collection Method by dataset:
Labeling Method by dataset:
Properties:
Dataset | No. of Images |
---|---|
NV Internal Data | 700M |
Link: https://www.image-net.org/
Data Collection Method by dataset:
Labeling Method by dataset:
Properties:
50,000 validation images from the ImageNet dataset
Zero-shot top-1 accuracy of NVCLIP on the ImageNet validation dataset:
Model | Top-1 Accuracy |
---|---|
ViT-H-336 | 0.7786 |
ViT-L-336 | 0.7629 |
Engine: TensorRT
Test Hardware:
Inference is run on the provided unpruned model at FP16 precision. The inference performance is measured using trtexec on Jetson AGX Xavier, Xavier NX, Orin, Orin NX, and on NVIDIA T4 and Ampere GPUs. The Jetson devices run in the Max-N configuration for maximum GPU frequency. The performance shown here is inference-only; end-to-end performance with streaming video data might vary depending on other bottlenecks in the hardware and software.
NVCLIP ViT-H
Platform | Batch Size | FPS |
---|---|---|
A2 | 128 | 34.88 |
L4 | 128 | 107.80 |
A30 | 128 | 230.04 |
L40 | 128 | 286.69 |
A100 | 128 | 466.98 |
H100 | 128 | 782.47 |
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Promise and the Explainability, Bias, Safety & Security, and Privacy Subcards.