PeopleNet Transformer v2.0
Description: 3-class object detection network to detect people in an image.
Latest Version: deployable_v1.0
Modified: February 3, 2024
Size: 204.98 MB

PeopleNet Transformer v2 Model Card

Model Overview

The models described in this card detect one or more physical objects from three categories within an image and return a bounding box around each object, as well as a category label for each object. The three categories of objects detected by these models are persons, bags, and faces.

Model Architecture

This model is based on the DINO object detector with FAN-Small as the feature extractor. The architecture uses attention modules that attend to only a small set of key sampling points around a reference, which speeds up both training and inference.

Training

This model was trained using the DINO entrypoint in TAO. The training algorithm optimizes the network to minimize the localization and confidence loss for the objects. The training was conducted in two stages: the model was first pretrained on OpenImages to learn rich representations, and then fine-tuned on a proprietary dataset for domain adaptation.
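
As a rough sketch of what a comparable two-stage run looks like from the TAO command line: the commands below assume the TAO 5.x launcher and hypothetical spec file names, so adjust them to your TAO version and dataset.

# Stage 1: pretraining on a large, broadly labeled dataset (hypothetical spec file).
tao model dino train -e specs/pretrain_openimages.yaml

# Stage 2: fine-tuning for domain adaptation; the stage-2 spec points
# train.pretrained_model_path at the checkpoint produced by stage 1.
tao model dino train -e specs/finetune_people.yaml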

Pretraining Data

The PeopleNet-Transformer-v2 model was pretrained on a subset of the OpenImages dataset. The diversity of OpenImages helps the transformer network learn a general understanding of objects. Because the original OpenImages dataset is not densely annotated, a state-of-the-art DINO detector trained on the 80 COCO classes was used to generate pseudo-labels for about 800K OpenImages images.

Fine-Tuning Data

After pretraining, the PeopleNet-Transformer-v2 model was fine-tuned on a proprietary dataset with more than 1.5 million images and more than 27 million objects across the three classes. The training dataset covers a mix of camera heights, crowd densities, and fields of view (FOV). Approximately half of the training data consists of images captured in an indoor office environment; for this case, the camera is typically mounted at a height of approximately 10 feet, angled at about 45 degrees, with a close field of view. This content was chosen to improve the accuracy of the models for the convenience-store retail analytics use case.

Training Dataset Object Distribution
Category     Number of Images    Number of Persons    Number of Bags    Number of Faces
Natural      1,444,335           16,794,728           4,156,515         6,283,255
Simulation   27,417              366,110              0                 92,916
Total        1,544,939           17,160,838           4,156,515         6,376,171

Training Data Ground-Truth Labeling Guidelines

The training dataset was created by human labelers drawing ground-truth bounding boxes and assigning categories. The following guidelines were used while labeling the training data for the NVIDIA PeopleNet Transformer model. If you re-train with your own dataset, follow the same guidelines for the highest accuracy.

PeopleNet-Transformer project labelling guidelines:

  • All objects that fall under one of the three classes (person, face, bag) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @1920x1080) are labeled with the appropriate class label. (A filtering sketch applying this size rule follows this list.)

  • If a person is carrying an object, mark the bounding box to include the carried object as long as it does not affect the silhouette of the person. For example, exclude a rolling bag if the person is pulling it behind them and it is distinctly visible as a separate object, but include objects, such as a backpack or purse, that do not significantly alter the silhouette of the pedestrian.

  • Occlusion: Partially occluded objects that do not belong to the person class and are approximately 60% or more visible are annotated with a bounding box around the visible part of the object and are marked as partially occluded. Objects with under 60% visibility are not annotated.

  • Occlusion for person class: If an occluded person’s head and shoulders are visible and the visible height is approximately 20% or more, mark a bounding box around the visible part of the person. If the head and shoulders are not visible, follow the occlusion guidelines above.

  • Truncation: Objects, other than a person, that are at the edge of the frame with a visibility of 60% or more are marked with the truncation flag for the object.

  • Truncation for person class: If a truncated person’s head and shoulders are visible, and the visible height is approximately 20% or more, mark the bounding box around the visible part of the person object. If the head and shoulders are not visible, follow the Truncation guidelines above.

  • A frame is not required to have an object.
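
If you prepare your own dataset to these guidelines with COCO-format JSON annotations, the minimum-size rule from the first bullet can be applied with a jq filter along the following lines. This is only a sketch: it assumes annotations at 1920x1080 resolution, and the file names are placeholders.

# Keep only annotations whose box is at least 10 px wide OR 10 px tall
# (COCO bbox format is [x, y, width, height]).
jq '.annotations |= map(select(.bbox[2] >= 10 or .bbox[3] >= 10))' \
   train_labels.json > train_labels_filtered.json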

Performance

Evaluation Data

The inference performance of the PeopleNet-Transformer-v2 model was measured against more than 90,000 proprietary images across a variety of environments.

Methodology and KPI

True positives, false positives, and false negatives are calculated using an intersection-over-union (IOU) criterion of greater than 0.5. In addition, a KPI with an IOU criterion of greater than 0.8 was added for extended-hand sequences, where a tight bounding box is a requirement for subsequent human pose estimation algorithms. The KPIs for the evaluation data are reported in the table below. The model is evaluated based on precision, recall, and accuracy.

Network: PeopleNet-Transformer-v2 (FP16)

Content                      Precision   Recall   Accuracy
Generic                      94.05       91.37    86.41
Office                       96.07       97.32    93.59
Café                         92.56       93.08    86.60
Low-contrast                 90.99       85.44    78.78
Extended-hands (IOU > 0.8)   95.01       93.68    89.29

Real-Time Inference Performance

Inference is run on the provided model at FP16 precision. The inference performance is measured using trtexec on NVIDIA Jetson devices and discrete GPUs; the measured platforms are listed in the table below. The Jetson devices run in the Max-N configuration for maximum GPU frequency. The numbers shown here are inference-only performance; end-to-end performance with streaming video data might vary depending on other bottlenecks in the hardware and software.
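
For reference, the Max-N setup on a Jetson is usually done before benchmarking with the commands below; the nvpmodel mode index for MAXN can differ between Jetson models, so treat this as a sketch and check the available modes first.

# Query the current power mode, switch to the maximum-performance (MAXN) mode,
# and lock the clocks at their maximum frequencies.
sudo nvpmodel -q
sudo nvpmodel -m 0
sudo jetson_clocks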

Platform            Batch Size   FPS
Jetson Orin Nano    4            4
Orin NX 16GB        4            6
AGX Orin 64GB       8            15
A2                  32           12
T4                  32           19
A30                 16           57
L4                  32           30
L40                 32           89
A100                32           121
H100                32           211
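
The numbers in the table above come from trtexec. A command along these lines can be used to build an FP16 engine from the deployable ONNX and benchmark it on your own hardware; the input tensor name ("inputs") is an assumption, so check the actual name in the ONNX file and adjust the shape and batch size accordingly.

# Build an FP16 TensorRT engine and report throughput; "inputs" is a placeholder
# for the model's actual input tensor name.
trtexec --onnx=peoplenet_transformer_v2_op17.onnx \
        --fp16 \
        --shapes=inputs:32x3x544x960 \
        --saveEngine=peoplenet_transformer_v2_fp16.engine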

How to Use This Model

These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU including NVIDIA Jetson devices. These models can only be used with Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.

The primary intention for these models is detecting people in a color (RGB) image. The model can be used to detect people from photos and videos by using appropriate video or image decoding and pre-processing. The model can also be used to detect bags and faces from images or videos. However, these additional classes are not the main intended use for these models.

The model is intended for training and fine-tuning with the Train Adapt Optimize (TAO) Toolkit and your own object detection dataset. High-fidelity models can be trained for new use cases. A Jupyter notebook available as part of the TAO container can be used to re-train the model.

The model is also intended for easy deployment to the edge using DeepStream SDK or TensorRT. DeepStream provides a facility to create efficient video analytic pipelines to capture, decode, and pre-process the data before running inference.

Input

B X 3 X 544 X 960 (B C H W)

Output

Category labels (people) and bounding-box coordinates for each detected person in the input image.

Instructions to Use the Pretrained Model with TAO

To use these models as pretrained weights for transfer learning, use the following snippet as a template for the dataset, train, and model components of the experiment spec file used to train a DINO model. For more information on the experiment spec file, see the TAO Toolkit User Guide.

dataset:
  num_classes: 4
train:
  pretrained_model_path: /path/to/the/model.pth
model:
  backbone: fan_small
  train_backbone: True
  num_feature_levels: 4
  dec_layers: 6
  enc_layers: 6
  num_queries: 900
  dropout_ratio: 0.0
  dim_feedforward: 2048
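
After fine-tuning, the checkpoint is typically exported to ONNX before deployment with DeepStream or TensorRT. A minimal sketch of that step, assuming the TAO 5.x launcher and a placeholder spec path (export options such as the checkpoint to export live in the export section of the experiment spec; see the TAO Toolkit User Guide):

# Export the fine-tuned DINO checkpoint to ONNX for deployment.
tao model dino export -e /path/to/experiment_spec.yaml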

Instructions to Deploy These Models with DeepStream

To create the entire end-to-end video analytic application, deploy these models with DeepStream SDK. DeepStream SDK is a streaming analytic toolkit that accelerates building AI-based video analytic applications. DeepStream supports direct integration of these models into the DeepStream sample app.

To deploy these models with DeepStream 6.1, use the following instructions:

  1. Download and install DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream development guide. The config files for the purpose-built models are located in /opt/nvidia/deepstream, which is the default DeepStream installation directory. This path is different if you installed DeepStream in a different directory.

  2. You must have one config file and one label file. These files are provided in NVIDIA-AI-IOT:
     pgie_ddetr_tao_config.txt - Main config file for the DeepStream app
     ddetr_labels.txt - Label file with the 3 classes

Key Parameters in pgie_ddetr_tao_config.txt

labelfile-path=../../models/ddetr/ddetr_labels.txt
model-engine-file=../../models/dino/dino.engine
onnx-file=../../models/dino/dino.onnx
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
output-blob-names=pred_boxes;pred_logits
infer-dims=3;544;960

Run deepstream-app:

deepstream-app -c deepstream_app_source1_ddetr.txt

Documentation to deploy with DeepStream is provided in the "Deploying to DeepStream" chapter of the TAO User Guide.

Limitations

Very Small Objects

The NVIDIA PeopleNet Transformer model was trained to detect objects larger than 10x10 pixels. Therefore, it may not detect objects smaller than 10x10 pixels.

Occluded Objects

When objects are occluded or truncated such that less than 20% of the object is visible, they may not be detected by the PeopleNet Transformer model. For person class objects, the model detects occluded people as long as the head and shoulders are visible. However, if the person’s head and/or shoulders are not visible, the object might not be detected unless more than 60% of the person is visible.

Dark-Lighting, Monochrome, or Infrared Camera Images

The PeopleNet Transformer model was trained on RGB images in good lighting conditions. Therefore, images captured in dark lighting conditions, a monochrome image, or an IR camera image might not provide good detection results.

Warped and Blurry Images

The PeopleNet Transformer models were not trained on fisheye-lens cameras or moving cameras. Therefore, the models might not perform well on warped images or images with motion-induced or other blur.

Face and Bag Class

Although the bag and face classes are included in the model, the accuracy of these classes is much lower than that of the person class. Some re-training is required to improve accuracy for these classes.

Model Versions

There are two versions of .onnx files provided. Use peoplenet_transformer_v2_op12.onnx for TensorRT version 8.5 or below. Use peoplenet_transformer_v2_op17.onnx for TensorRT version 8.6 or above.

  • trainable_v1.0 - Pre-trained model for PeopleNet Transformer-v2.
  • deployable_v1.0 - Model deployable to DeepStream or TensorRT.
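
If you are unsure which TensorRT version is installed on the deployment target, and therefore which of the two ONNX files to use, a quick check is shown below; it assumes the TensorRT Python bindings are installed (on Debian-based systems, dpkg -l | grep nvinfer is an alternative).

# Print the installed TensorRT version to choose between the op12 and op17 ONNX files.
python3 -c "import tensorrt; print(tensorrt.__version__)"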

References

Citations

  • Zhang, H., Li, F., Liu, S., Zhang, L., Su, H., Zhu, J., Ni, L., Shum, H.: DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection
  • Zhou, D., Yu, Z., Xie, E., Xiao, C., Anandkumar, A., Feng, J., Alvarez, J.: Understanding The Robustness in Vision Transformers

Using TAO Pre-Trained Models

  • Get TAO Container
  • Get other purpose-built models from NGC model registry:
    • TrafficCamNet
    • DashCamNet
    • FaceDetectIR
    • VehicleMakeNet
    • VehicleTypeNet
    • ActionRecognitionNet
    • PoseClassificationNet
    • ReIdentificationNet

License

The license to use the model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of these licenses.

Technical Blogs

  • Access the latest in Vision AI development workflows with NVIDIA TAO Toolkit 5.0
  • Improve accuracy and robustness of vision ai models with vision transformers and NVIDIA TAO
  • Train like a ‘pro’ without being an AI expert using TAO AutoML
  • Create Custom AI models using NVIDIA TAO Toolkit with Azure Machine Learning
  • Developing and Deploying AI-powered Robots with NVIDIA Isaac Sim and NVIDIA TAO
  • Learn endless ways to adapt and supercharge your AI workflows with TAO - Whitepaper
  • Customize Action Recognition with TAO and deploy with DeepStream
  • Read the 2 part blog on training and optimizing 2D body pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train real-time License plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important, learn how you can achieve state of the art accuracy for classification and object detection models using TAO

Suggested Reading

  • More information about TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
  • TAO documentation
  • Read the TAO Getting Started guide and release notes.
  • If you have any questions or feedback, see the discussions on TAO Toolkit Developer Forums
  • Deploy your models for video analytics application using DeepStream. Learn more about DeepStream SDK
  • Deploy your models in Riva for conversational AI use cases.

Ethical AI

The NVIDIA PeopleNet Transformer model detects persons, bags, and faces in images.

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.