TAO Pretrained DetectNet V2

Description: Pretrained weights to facilitate transfer learning using TAO Toolkit.
Publisher: NVIDIA
Latest Version: resnet34
Modified: August 19, 2024
Size: 170.65 MB

Model Overview

Description:

DetectNet_v2 detects individual objects in an image. This model is ready for commercial use.

References:

Other TAO Pre-trained Models

  • Get TAO object detection pre-trained models for the YOLOV4, YOLOV3, FasterRCNN, SSD, DSSD, and RetinaNet architectures from the NGC model registry

  • Get TAO EfficientDet object detection pre-trained models for the EfficientDet architecture from the NGC model registry

  • Get TAO classification pre-trained models from the NGC model registry

  • Get TAO instance segmentation pre-trained models for the MaskRCNN architecture from NGC

  • Get TAO semantic segmentation pre-trained models for the UNet architecture from NGC

  • Get Purpose-built models from NGC model registry:

    • PeopleNet
    • TrafficCamNet
    • DashCamNet
    • FaceDetectIR
    • VehicleMakeNet
    • VehicleTypeNet
    • PeopleSegNet
    • PeopleSemSegNet
    • License Plate Detection
    • License Plate Recognition
    • Facial Landmark
    • FaceDetect
    • 2D Body Pose Net
    • ActionRecognitionNet

Model Architecture:

Architecture Type: Convolutional Neural Network (CNN)
Network Architecture: DetectNet_v2

This model card contains pretrained weights that may be used as a starting point with the DetectNet_v2 object detection networks in Train Adapt Optimize (TAO) Toolkit to facilitate transfer learning.
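
To make the transfer-learning workflow concrete, a minimal sketch of a training invocation is shown below. The paths and the spec file name are illustrative, and the launcher syntax follows TAO 5.x (older releases use tao detectnet_v2 train without the model keyword):

tao model detectnet_v2 train \
    -e /workspace/tao-experiments/specs/detectnet_v2_train.txt \
    -r /workspace/tao-experiments/results \
    -k <encryption_key>

The downloaded pretrained weights are referenced from the spec file's model_config rather than on the command line (see the notes under Model Version(s) below).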

Input:

Input Type(s): Image
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: 3D
Other Properties Related to Input: Minimum Resolution: B x 3 x 224 x 224; Maximum Resolution: B x 3 x 518 x 518; no minimum bit depth, alpha, or gamma

Output:

Output Type(s): Label(s), Bounding-Box(es), Confidence Scores
Output Format: Label: Text String(s); Bounding Box: (x-coordinate, y-coordinate, width, height); Confidence Scores: Floating Point
Other Properties Related to Output: Category Label(s) (labels of detected objects), Bounding Box Coordinates, Confidence Scores

Software Integration:

Runtime Engine(s):

  • TAO 5.2
  • DeepStream 6.1 or later

Supported Hardware Architecture(s):

  • Ampere
  • Jetson
  • Hopper
  • Lovelace
  • Pascal
  • Turing
  • Volta

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra

Model Version(s):

  • resnet10/resnet18/resnet34/resnet50
  • vgg16/vgg19
  • googlenet
  • mobilenet_v1/mobilenet_v2
  • squeezenet
  • darknet19/darknet53

Note: These are unpruned models containing only the feature-extractor weights; they cannot be used in an object detection application without re-training.

Note: When using the ResNet34 model, please set the all_projections field in the model_config to False. For more information about this parameter, please refer to the TAO Getting Started Guide.
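
For illustration, the relevant stanza of a DetectNet_v2 training spec might look like the sketch below; the weights path is a placeholder, and a real spec contains many more fields:

model_config {
  arch: "resnet"
  num_layers: 34
  all_projections: false
  pretrained_model_file: "/workspace/tao-experiments/pretrained/resnet_34.hdf5"
}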

Note: The pre-trained weights in this model card are only for DetectNet_v2 object detection networks and shouldn't be used for YOLOV3, RetinaNet, FasterRCNN, SSD, or DSSD based object detection models. Pre-trained weights for those models are available separately in the NGC model registry.

Training & Evaluation:

Training Dataset:

Link: https://github.com/openimages/dataset/blob/main/READMEV3.md
Data Collection Method by dataset:

  • Unknown

Labeling Method by dataset:

  • Unknown

Properties:
Roughly 400,000 training images and 7,000 validation images across thousands of classes, as defined by the Google OpenImages Version 3 dataset. Most of the human verifications were done by in-house annotators at Google; a smaller part was done through crowd-sourced verification from the Image Labeler: Crowdsource app (g.co/imagelabeler).

Evaluation Dataset:

Link: https://github.com/openimages/dataset/blob/main/READMEV3.md
Data Collection Method by dataset:

  • Unknown

Labeling Method by dataset:

  • Unknown

Properties: 15,000 test images from the Google OpenImages Version 3 dataset.

Running Object Detection Models Using TAO

The object detection apps in TAO expect data in the KITTI file format. TAO provides a simple command-line interface to train a deep learning model for object detection.
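
For reference, a KITTI label file is plain text with one object per line and 15 space-separated fields; DetectNet_v2 training only consumes the class name and the four pixel bounding-box coordinates (left, top, right, bottom), so the remaining fields can be zeroed out. A sample line with illustrative values:

car 0.00 0 0.00 587.01 173.33 614.12 200.12 0.00 0.00 0.00 0.00 0.00 0.00 0.00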

The models in this area of the registry are only compatible with the TAO Toolkit. For more information about the TAO container, please visit the TAO container page.

  1. Install the NGC CLI from ngc.nvidia.com.

  2. Configure the NGC CLI using the following command and follow the prompts:

ngc config set

  3. To view all the backbones that are supported by the object detection architectures in TAO:

ngc registry model list nvidia/tao_pretrained_detectnet_v2:*

  4. To download the model:

ngc registry model download-version nvidia/tao_pretrained_detectnet_v2:<template> --dest <path>
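
For example, to download the resnet34 version (the latest version listed at the top of this page) into a local directory, where the destination path is just illustrative:

ngc registry model download-version nvidia/tao_pretrained_detectnet_v2:resnet34 --dest ./pretrained_detectnet_v2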

Instructions to run the sample notebook

  1. Get the NGC API key from the SETUP tab on the left and store it for future use. Detailed instructions can be found in the NGC documentation.

  2. Configure the NGC command-line interface using the command below and follow the prompts:

ngc config set

  3. Download the sample notebooks from NGC using the command below:

ngc registry resource download-version "nvidia/tao_cv_samples:v1.0.2"

  4. Invoke the Jupyter notebook using the following command:

jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root

  5. Open a web browser and navigate to the following URL to start running the notebooks when working on a local machine:

http://0.0.0.0:8888

If you wish to view the notebook from a remote client, modify the URL as follows:

http://a.b.c.d:8888

where a.b.c.d is the IP address of the machine running the container.
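
If port 8888 is not exposed to the remote client directly, an SSH tunnel is a common alternative. A minimal sketch, assuming standard OpenSSH and that a.b.c.d accepts SSH logins (<user> is a placeholder):

ssh -L 8888:localhost:8888 <user>@a.b.c.d

The notebook is then reachable on the client at http://localhost:8888.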

Technical blogs

  • Access the latest in Vision AI development workflows with NVIDIA TAO Toolkit 5.0
  • Improve accuracy and robustness of vision AI models with vision transformers and NVIDIA TAO
  • Train like a ‘pro’ without being an AI expert using TAO AutoML
  • Create Custom AI models using NVIDIA TAO Toolkit with Azure Machine Learning
  • Developing and Deploying AI-powered Robots with NVIDIA Isaac Sim and NVIDIA TAO
  • Learn endless ways to adapt and supercharge your AI workflows with TAO - Whitepaper
  • Customize Action Recognition with TAO and deploy with DeepStream
  • Read the two-part blog on training and optimizing a 2D body pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train a real-time license plate detection and recognition app with TAO and DeepStream
  • Model accuracy is extremely important; learn how you can achieve state-of-the-art accuracy for classification and object detection models using TAO

Suggested reading

  • More information about the TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
  • TAO documentation
  • Read the TAO Getting Started guide and release notes
  • If you have any questions or feedback, please refer to the discussions on TAO Toolkit Developer Forums
  • Deploy your models for video analytics applications using DeepStream. Learn more about the DeepStream SDK
  • Deploy your models in Riva for conversational AI use cases

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Promise and the Explainability, Bias, Safety & Security, and Privacy Subcards.