PeopleNet Transformer

Description: 3-class object detection network to detect people in an image.
Latest Version: deployable_v1.1
Modified: August 19, 2024
Size: 179.25 MB

PeopleNet Transformer Model Card

Description:

The PeopleNet Transformer detects persons, bags, and faces in an image. This model is ready for commercial use.

References:

Citations

  • Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable Transformers for End-to-End Object Detection

Using TAO Pre-trained Models

  • Get TAO Container
  • Get other purpose-built models from the NGC model registry:
    • TrafficCamNet
    • PeopleNet
    • PeopleNet-Transformer
    • DashCamNet
    • FaceDetectIR
    • VehicleMakeNet
    • VehicleTypeNet
    • PeopleSegNet
    • PeopleSemSegNet
    • License Plate Detection
    • License Plate Recognition
    • PoseClassificationNet
    • Facial Landmark
    • FaceDetect
    • 2D Body Pose Estimation
    • ActionRecognitionNet
    • People ReIdentification
    • PointPillarNet
    • CitySegFormer
    • Retail Object Detection
    • Retail Object Embedding
    • Optical Inspection
    • Optical Character Detection
    • Optical Character Recognition
    • PCB Classification
    • PeopleSemSegFormer

Model Architecture:

Architecture Type: Transformer
Network Architecture: Deformable DETR + ResNet50 (Feature Extractor)

This model is based on the Deformable DETR object detector with ResNet50 as the feature extractor. The architecture uses attention modules that attend only to a small set of key sampling points around a reference point, which improves training convergence and inference speed. PeopleNet-Transformer was modified from the original Deformable DETR by reducing the number of feature levels taken from the backbone from 4 to 2 for optimized performance.

Input:

Input Type(s): Images
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: Four Dimensional (4D)
Other Properties Related to Input:

  • RGB fixed resolution: 960 x 544 x 3 (W x H x C)
  • Channel ordering of the input: NCHW, where N = batch size, C = number of channels (3), H = height of images (544), W = width of images (960)
  • Input scale: 1/255.0
  • Mean subtraction: None
  • No minimum bit depth, alpha, or gamma
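
A minimal Python sketch of this preprocessing contract (resize to 960 x 544, RGB order, 1/255 scaling, NCHW layout), assuming OpenCV and NumPy are available; the function name and image path are illustrative only:

import cv2
import numpy as np

def preprocess(image_path):
    """Load an image and convert it to a 1 x 3 x 544 x 960 float32 tensor."""
    bgr = cv2.imread(image_path)                  # OpenCV loads images as BGR
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)    # the model expects RGB input
    resized = cv2.resize(rgb, (960, 544))         # dsize is (width, height) = (960, 544)
    scaled = resized.astype(np.float32) / 255.0   # input scale 1/255, no mean subtraction
    chw = np.transpose(scaled, (2, 0, 1))         # HWC -> CHW
    return np.expand_dims(chw, axis=0)            # add batch dimension -> NCHW

batch = preprocess("people.jpg")                  # shape: (1, 3, 544, 960)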

Output:

Output Type(s): Label(s), Bounding-Box(es), Confidence Scores
Output Format: Label: Text String(s); Bounding Box: (x-coordinate, y-coordinate, width, height), Confidence Scores: Floating Point
Other Properties Related to Output: Category Labels (bag, face, person), Bounding Box Coordinates, Confidence Scores
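
As a hedged post-processing sketch: the deployable network exposes pred_boxes and pred_logits tensors (see the DeepStream config below). Assuming they follow the standard Deformable DETR convention of per-query class logits and normalized (cx, cy, w, h) boxes, and noting that the class ordering, any extra/background class, and the score threshold below are assumptions to verify against ddetr_labels.txt and the actual ONNX outputs:

import numpy as np

CLASS_NAMES = ["person", "bag", "face"]   # assumed order; must match the deployed label file

def decode(pred_logits, pred_boxes, img_w=960, img_h=544, score_thresh=0.5):
    """Turn raw detections into (label, (x, y, w, h), score) tuples in pixels."""
    scores = 1.0 / (1.0 + np.exp(-pred_logits[0]))    # sigmoid over class logits
    class_ids = scores.argmax(axis=-1)
    best = scores.max(axis=-1)
    results = []
    for q in np.where(best > score_thresh)[0]:
        cx, cy, w, h = pred_boxes[0, q]               # normalized center-format box
        x = (cx - w / 2.0) * img_w                    # top-left x in pixels
        y = (cy - h / 2.0) * img_h                    # top-left y in pixels
        cid = int(class_ids[q])
        label = CLASS_NAMES[cid] if cid < len(CLASS_NAMES) else f"class_{cid}"  # guard for extra classes
        results.append((label, (x, y, w * img_w, h * img_h), float(best[q])))
    return results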

Software Integration:

Runtime Engine(s):

  • TAO - 5.2
  • DeepStream 6.1 or later

Supported Hardware Architecture(s):

  • Ampere
  • Jetson
  • Hopper
  • Lovelace
  • Pascal
  • Turing

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra

Model Version(s):

We provide two versions of the .onnx file: resnet50_peoplenet_transformer_op12.onnx should be used with TensorRT version 8.5 or earlier, and resnet50_peoplenet_transformer_op17.onnx should be used with TensorRT version 8.6 or later.

  • trainable_v1.1 - Pre-trained model for PeopleNet Transformer.
  • deployable_v1.1 - Model deployable to DeepStream or TensorRT.
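
A minimal sketch of building an FP16 TensorRT engine from the deployable .onnx using the TensorRT 8.x Python API; the input file name follows the version note above, the output path is a placeholder, and trtexec can be used instead:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Use the op12 file for TensorRT 8.5 or earlier, the op17 file for 8.6 or later.
with open("resnet50_peoplenet_transformer_op17.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX file")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)               # the deployable model is benchmarked at FP16
engine_bytes = builder.build_serialized_network(network, config)

with open("peoplenet_transformer_fp16.engine", "wb") as f:
    f.write(engine_bytes)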

Training & Evaluation:

Training Dataset:

Data Collection Method by dataset:

  • Automatic/Sensors

Labeling Method by dataset:

  • Human

Properties:
Proprietary dataset with more than 1.5 million images and more than 39 million labeled objects (persons, bags, and faces). The training dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV). Approximately half of the training data consists of images captured in an indoor office environment. For these images, the camera is typically set up at a height of approximately 10 feet and a 45-degree angle, with a close field of view.

Training Dataset Object Distribution

Category     Number of Images   Number of Persons   Number of Bags   Number of Faces
Natural      1,043,763          16,609,798          4,238,989        5,690,946
Rotated      501,176            7,666,737           2,021,704        2,662,528
Simulation   27,417             368,914             0                92,916
Total        1,544,939          24,645,449          6,260,693        8,446,390

Training Data Ground-truth Labeling Guidelines

The training dataset was created by human labelers annotating ground-truth bounding boxes and categories. The following guidelines were used while labeling the training data for the NVIDIA PeopleNet model. If you are looking to re-train with your own dataset, please follow these guidelines for the highest accuracy.

PeopleNet-Transformer project labelling guidelines:

  1. All objects that fall under one of the three classes (person, face, bag) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @1920x1080) are labeled with the appropriate class label.

  2. If a person is carrying an object, mark the bounding box to include the carried object as long as it does not affect the silhouette of the person. For example, exclude a rolling bag if the person is pulling it behind them and it is distinctly visible as a separate object, but include a backpack, purse, etc. that does not alter the silhouette of the pedestrian significantly.

  3. Occlusion: Partially occluded objects that do not belong to the person class and are approximately 60% or more visible are labeled with a bounding box around the visible part of the object and marked as partially occluded. Objects under 60% visibility are not annotated.

  4. Occlusion for person class: If an occluded person's head and shoulders are visible and the visible height is approximately 20% or more, mark the bounding box around the visible part of the person. If the head and shoulders are not visible, please follow the occlusion guidelines in item 3 above.

  5. Truncation: Objects other than persons that are at the edge of the frame and 60% or more visible are marked with the truncation flag.

  6. Truncation for person class: If a truncated person's head and shoulders are visible and the visible height is approximately 20% or more, mark the bounding box around the visible part of the person. If the head and shoulders are not visible, please follow the truncation guidelines in item 5 above.

  7. Each frame is not required to have an object.

Evaluation Dataset:

Data Collection Method by dataset:

  • Automatic/Sensors

Labeling Method by dataset:

  • Human

Properties:
90,000 images from an internal, proprietary dataset of more than 7.6 million images containing more than 71 million people. The dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV). Approximately half of the data consists of images captured in an indoor office environment, where the camera is typically set up at a height of approximately 10 feet and a 45-degree angle, with a close field of view. This was done to improve the accuracy of the models for the convenience-store retail analytics use case. We have also incorporated approximately 500 thousand images with low-density scenes of people extending their hands and feet to improve performance for use cases where person detection is followed by pose estimation.

Methodology and KPI

True positives, false positives, and false negatives are calculated using an intersection-over-union (IOU) criterion greater than 0.5. In addition, we have added a KPI with an IOU criterion greater than 0.8 for extended-hand sequences, where a tight bounding box is a requirement for subsequent human pose estimation algorithms. The KPIs for the evaluation data are reported in the table below. The model is evaluated based on precision, recall, and accuracy.

Content                      PeopleNet-Transformer FP32          PeopleNet-Transformer FP16
                             Precision   Recall   Accuracy       Precision   Recall   Accuracy
Generic                      94.27       83.69    79.69          93.74       83.06    78.74
Office                       95.65       93.87    90.63          96.07       93.10    89.68
Café                         94.21       81.98    78.05          94.04       81.43    77.43
Extended-hands (IOU > 0.8)   86.87       79.12    70.67          86.21       77.82    69.20
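
For reference, a minimal sketch of the IOU matching criterion used above, with boxes given as (x, y, w, h) in pixels (the function name and example boxes are illustrative):

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(ax, bx)                              # left edge of the intersection
    iy = max(ay, by)                              # top edge of the intersection
    iw = max(0.0, min(ax + aw, bx + bw) - ix)     # intersection width (0 if disjoint)
    ih = max(0.0, min(ay + ah, by + bh) - iy)     # intersection height (0 if disjoint)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 100, 100), (50, 0, 100, 100)))   # ~0.33, below the 0.5 matching threshold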

Inference:

Engine: TensorRT
Test Hardware:

  • Jetson AGX Xavier
  • Xavier NX
  • Orin
  • Orin NX
  • NVIDIA T4
  • Ampere GPU
  • A2
  • A30
  • L4
  • DGX H100
  • DGX A100
  • L40
  • JAO 64GB
  • Orin NX 16GB
  • Orin Nano 8GB

The inference is run on the provided model at FP16 precision. The inference performance is measured using trtexec on Jetson AGX Xavier, Xavier NX, Orin, Orin NX, NVIDIA T4, and Ampere GPUs. The Jetson devices run in the Max-N configuration for maximum GPU frequency. The performance shown here is inference-only performance. The end-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.

Model Arch              Inference Resolution   Precision   Batch Size
PeopleNet-Transformer   3x960x544              FP16        1

Model Arch              Xavier NX   AGX Xavier   Orin NX 16GB   Orin 64GB   T4   A100   A30   A10   A2
PeopleNet-Transformer   9           14           13             32          43   176    110   93    26

How to use this model

These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.

The primary use case intended for these models is detecting people in a color (RGB) image. The model can be used to detect people from photos and videos by using appropriate video or image decoding and pre-processing. As a secondary use case the model can also be used to detect bags and faces from images or videos. However, these additional classes are not the main intended use for these models.

The model is intended for training and fine-tuning using the Train Adapt Optimize (TAO) Toolkit and the user's own dataset. High-fidelity models can be trained for new use cases. The Jupyter notebook available as part of the TAO container can be used to re-train.

The model is also intended for easy deployment to the edge using DeepStream SDK or TensorRT. DeepStream provides the facilities to create efficient video analytics pipelines to capture, decode, and pre-process the data before running inference.

Instructions to use pretrained model with TAO

To use these models as pretrained weights for transfer learning, please use the snippet below as a template for the model and train components of the experiment spec file used to train a Deformable DETR model. For more information on the experiment spec file, please refer to the TAO Toolkit User Guide.

train:
  pretrained_model_path: /path/to/the/model.pth
model:
  backbone: resnet_50
  num_feature_levels: 2
  return_interm_indices: [1, 2]
  dec_layers: 6
  enc_layers: 6
  num_queries: 300
  with_box_refine: True
  dropout_ratio: 0.3
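
In this snippet, num_feature_levels: 2 and return_interm_indices: [1, 2] reflect the reduction of backbone feature levels from 4 to 2 described in the Model Architecture section above; the remaining hyperparameters should be adjusted to your dataset as described in the TAO Toolkit User Guide.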

Instructions to deploy these models with DeepStream

To create the entire end-to-end video analytics application, deploy these models with the DeepStream SDK. The DeepStream SDK is a streaming analytics toolkit that accelerates building AI-based video analytics applications. DeepStream supports direct integration of these models into the deepstream-app reference application.

To deploy these models with DeepStream 6.1, please follow the instructions below:

Download and install the DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream development guide. The config files for the purpose-built models are located under the DeepStream installation directory; /opt/nvidia/deepstream is the default installation directory, and this path will be different if you are installing in a different directory.

You will need one config file and one label file. These files are provided in NVIDIA-AI-IOT.

pgie_ddetr_tao_config.txt - Main config file for DeepStream app
ddetr_labels.txt - Label file with 3 classes

Key Parameters in pgie_ddetr_tao_config.txt

labelfile-path=../../models/ddetr/ddetr_labels.txt
model-engine-file=../../models/ddetr/ddetr_266_fp16.engine
onnx-file=../../models/ddetr/ddetr_266.onnx
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
output-blob-names=pred_boxes;pred_logits
infer-dims=3;544;960

Run deepstream-app:

deepstream-app -c deepstream_app_source1_ddetr.txt

Documentation to deploy with DeepStream is provided in "Deploying to DeepStream" chapter of TAO User Guide.

Technical blogs

  • Access the latest in Vision AI development workflows with NVIDIA TAO Toolkit 5.0
  • Improve accuracy and robustness of vision ai models with vision transformers and NVIDIA TAO
  • Train like a ‘pro’ without being an AI expert using TAO AutoML
  • Create Custom AI models using NVIDIA TAO Toolkit with Azure Machine Learning
  • Developing and Deploying AI-powered Robots with NVIDIA Isaac Sim and NVIDIA TAO
  • Learn endless ways to adapt and supercharge your AI workflows with TAO - Whitepaper.
  • Customize Action Recognition with TAO and deploy with DeepStream
  • Read the 2 part blog on training and optimizing 2D body pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train real-time License plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important, learn how you can achieve state of the art accuracy for classification and object detection models using TAO.

Suggested reading

  • More information about TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
  • TAO documentation
  • Read the TAO Getting Started guide and release notes.
  • Deploy your models for video analytics application using DeepStream. Learn more about DeepStream SDK.
  • Deploy your models in Riva for ConvAI use case.
  • If you have any questions or feedback, please refer to the discussions on TAO Toolkit Developer Forums.

Ethical Considerations:

The NVIDIA PeopleNet Transformer model detects faces. However, no additional information, such as race, gender, or skin type, is inferred about the detected faces.

The training and evaluation datasets mostly consist of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Promise and the Explainability, Bias, Safety & Security, and Privacy Subcards.