PeopleNet - AMR

Description: 3-class object detection network to detect people in an image.
Publisher: -
Latest Version: deployable_v1.0
Modified: August 19, 2024
Size: 78.77 MB

PeopleNet - AMR Model Card

Model Overview

The PeopleNet Autonomous Mobile Robot (AMR) model detects persons, bags, and faces in an image. This model is ready for commercial use.

References

Citations

  • Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: CVPR. (2016)
  • Erhan, D., Szegedy, C., Toshev, A., Anguelov, D.: Scalable object detection using deep neural networks. In: CVPR. (2014)
  • He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition. In: CVPR. (2015)

Using TAO Pre-trained Models

  • Get TAO Container
  • Get other purpose-built models from the NGC model registry:
    • PeopleNet
    • TrafficCamNet
    • FaceDetectIR
    • VehicleMakeNet
    • VehicleTypeNet

Model Architecture

Architecture Type: Convolutional Neural Network (CNN)
Network Architecture: DetectNet_v2 + ResNet34 (Feature Extractor)

This model is based on the NVIDIA DetectNet_v2 detector with ResNet34 as the feature extractor. This architecture, also known as GridBox object detection, uses bounding-box regression on a uniform grid over the input image. The GridBox system divides the input image into a grid; each grid cell predicts four normalized bounding-box parameters (xc, yc, w, h) and a confidence value per output class.
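
The sketch below illustrates this output layout by decoding per-cell (xc, yc, w, h) regressions and confidence values into flat pixel-space boxes. It is a simplified sketch, not the exact DetectNet_v2 decode (the real decode also applies the bbox scale and offset shown in the model_config snippet later in this card); the function and parameter names are illustrative.

import numpy as np

def decode_grid(bbox_pred, conf_pred, img_w=960, img_h=544):
    """Convert per-cell GridBox regressions into pixel-space boxes.

    bbox_pred: (4, H, W) array of normalized (xc, yc, w, h) per grid cell
    conf_pred: (H, W) array of per-cell confidence for one class
    """
    xc, yc, w, h = bbox_pred                    # the four regression channels
    x = (xc - w / 2.0) * img_w                  # normalized center -> pixel corner
    y = (yc - h / 2.0) * img_h
    boxes = np.stack([x, y, w * img_w, h * img_h], axis=-1).reshape(-1, 4)
    return boxes, conf_pred.reshape(-1)         # (N, 4) boxes and (N,) scores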

The raw normalized bounding-box and confidence detections need to be post-processed by a clustering algorithm, such as DBSCAN or NMS, to produce the final bounding-box coordinates and category labels.
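
As a minimal sketch of that clustering step, the snippet below uses scikit-learn's DBSCAN to merge the raw detections for one class into final boxes. The eps, min_samples, and confidence-threshold values are illustrative placeholders, not TAO's tuned post-processing parameters.

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_boxes(boxes, scores, eps=0.15, min_samples=2, conf_threshold=0.3):
    """Merge raw per-cell detections for one class into final boxes.

    boxes:  (N, 4) array of normalized (xc, yc, w, h) predictions
    scores: (N,) array of per-cell confidence values
    """
    keep = scores >= conf_threshold            # drop low-confidence cells first
    boxes, scores = boxes[keep], scores[keep]
    if len(boxes) == 0:
        return np.empty((0, 4)), np.empty((0,))
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(boxes)
    merged, merged_scores = [], []
    for c in set(labels) - {-1}:               # label -1 is DBSCAN noise
        members = labels == c
        weights = scores[members] / scores[members].sum()
        merged.append((boxes[members] * weights[:, None]).sum(axis=0))
        merged_scores.append(float(scores[members].max()))
    return np.asarray(merged), np.asarray(merged_scores)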

Input:

Input Type(s): Images
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: 4D
Other Properties Related to Input:

  • Fixed Resolution: 960 x 544 x 3 (W x H x C)
  • Channel Ordering: NCHW, where N = batch size, C = number of channels (3), H = image height (544), W = image width (960)
  • Input Scale: 1/255.0
  • Mean Subtraction: None
  • No minimum bit depth, alpha, or gamma

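Putting the properties above together, pre-processing an image for this model amounts to resizing to 960 x 544, scaling by 1/255, and re-ordering to NCHW, with no mean subtraction. A minimal sketch using Pillow and NumPy (both assumed here, not mandated by the card):

import numpy as np
from PIL import Image

def preprocess(path, width=960, height=544):
    """Load an image, resize to the fixed network resolution, and
    return an NCHW float32 tensor scaled by 1/255 (no mean subtraction)."""
    img = Image.open(path).convert("RGB").resize((width, height))
    arr = np.asarray(img, dtype=np.float32) / 255.0   # input scale: 1/255.0
    arr = arr.transpose(2, 0, 1)                      # HWC -> CHW
    return arr[None, ...]                             # batch dim -> (1, 3, 544, 960)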

Output:

Output Type(s): Label(s), Bounding-Box(es), Confidence Scores
Output Format(s): Label: Text String(s); Bounding Box: (x-coordinate, y-coordinate, width, height); Confidence Scores: Floating Point
Other Properties Related to Output: Category Labels: Bag, Face, Person; Bounding-Box Coordinates; Confidence Scores


Software Integration:

Runtime Engine(s):

  • TAO 5.1

Supported Hardware Architecture Compatibility:

  • Ampere
  • Jetson
  • Hopper
  • Lovelace
  • Pascal
  • Turing

Preferred Operating System(s):

  • Linux
  • Linux 4 Tegra

Model versions

  • deployable_v1.0
  • trainable_v1.0

Training

  • Data Collection Method by dataset:
    • Automatic/Sensors
  • Labeling Method by dataset:
    • Human

This model was trained using the DetectNet_v2 entrypoint in TAO. The training algorithm optimizes the network to minimize the localization and confidence loss for the objects. The training is carried out in two phases. In the first phase, the network is trained without regularization.
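
For reference, switching regularization off in a DetectNet_v2 experiment spec looks roughly like the fragment below. This assumes the standard DetectNet_v2 training_config schema; the epoch count and regularizer weight are placeholders, not the values used to train this model.

training_config {
  num_epochs: 120        # placeholder value
  regularizer {
    type: NO_REG         # first phase: train without regularization
    weight: 3.0e-9       # regularizer weight; only used with L1/L2
  }
}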

Training Data Properties

Internal, proprietary dataset of more than 3 million images containing more than 8 million people. The training dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV), captured with a Hawkeye camera. Approximately two thirds of the training data consists of images captured in indoor and outdoor environments from a horizontal viewpoint, with the camera typically set up at approximately 2 to 5 feet in height, at a 90-degree angle, with a wide field of view. We also added approximately 45 thousand images of low-density scenes from a robot's point of view to improve performance in use cases where person detection is needed at low camera heights.

Category             Number of Images   Number of Persons   Number of Bags   Number of Faces
Natural              1,920,657          6,592,311           1,811,658        3,032,396
-- Robotics Subset   43,076             160,806             40,413           24,109
Rotated              1,020,163          1,800,844           291,754          963,805
Total                3,028,550          8,553,961           2,143,825        4,020,310

Training Data Ground-truth Labeling Guidelines

The training dataset was created by human labellers drawing ground-truth bounding boxes and assigning categories. The following guidelines were used while labelling the training data for the NVIDIA PeopleNet AMR model. If you want to re-train with your own dataset, follow the guidelines below for the highest accuracy.

PeopleNet AMR project labelling guidelines:

  1. All objects that fall under one of the three classes (person, face, bag) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @1920x1080) are labeled with the appropriate class label.

  2. If a person is carrying an object, mark the bounding box to include the carried object as long as it doesn't alter the person's silhouette. For example, exclude a rolling bag that a person pulls behind them and that is distinctly visible as a separate object, but include a backpack, purse, or similar item that does not significantly alter the pedestrian's silhouette.

  3. Occlusion: Partially occluded objects that do not belong to the person class and are approximately 60% or more visible are labeled with a bounding box around the visible part of the object and marked as partially occluded. Objects under 60% visibility are not annotated.

  4. Occlusion for the person class: If an occluded person's head and shoulders are visible and the visible height is approximately 20% or more, the person is labeled with a bounding box around the visible part. If the head and shoulders are not visible, follow the occlusion guidelines in item 3 above.

  5. Truncation: An object other than a person that is at the edge of the frame and is 60% or more visible is labeled with a bounding box around the visible part and marked with the truncation flag.

  6. Truncation for the person class: If a truncated person's head and shoulders are visible and the visible height is approximately 20% or more, mark the bounding box around the visible part of the person. If the head and shoulders are not visible, follow the truncation guidelines in item 5 above.

  7. Each frame is not required to have an object.

Performance

Evaluation Data Properties

  • Data Collection Method by dataset:
    • Automatic/Sensors
  • Labeling Method by dataset:
    • Human

15,000 images drawn from an internal, proprietary dataset of more than 3 million images containing more than 8 million people. The parent dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV). Approximately two thirds of the data consists of images captured in indoor and outdoor environments from a horizontal viewpoint, with the camera typically set up at approximately 2 to 5 feet in height, at a 90-degree angle, with a wide field of view. Approximately 45 thousand images of low-density scenes from a robot's point of view were also added to improve performance in use cases where person detection is needed at low camera heights.

Methodology and KPI

True positives, false positives, and false negatives are calculated using an intersection-over-union (IOU) criterion greater than 0.5. The KPIs for the evaluation data are reported in the table below; the FP16 model is evaluated on precision, recall, and accuracy (a sketch of this computation follows the table).

Content                      Precision (%)   Recall (%)   Accuracy (%)
Generic                      98.75           94.71        85.47
Office                       94.24           78.05        74.50
Robotics                     89.01           83.14        76.39
Extended Hands               97.00           89.84        87.41
Extended Hands (IOU > 0.8)   91.50           84.76        78.57
People (IOU > 0.8)           82.56           77.62        66.69
Robotics (IOU > 0.8)         84.81           80.28        70.91
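
The card does not state how accuracy is defined. The sketch below assumes the common detection convention accuracy = TP / (TP + FP + FN), with greedy one-to-one matching of predictions to ground truth at the IOU threshold; all names are illustrative.

def iou(a, b):
    """IOU of two boxes given as (x, y, width, height)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def detection_kpis(preds, gts, iou_thresh=0.5):
    """Greedy one-to-one matching of predicted boxes to ground truth."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gts):
            overlap = iou(p, g)
            if i not in matched and overlap >= best_iou:
                best, best_iou = i, overlap
        if best is not None:
            matched.add(best)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = tp / (tp + fp + fn) if tp + fp + fn else 0.0  # assumed definition
    return precision, recall, accuracy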

How to use this model

These models must be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. The models can only be used with the Train Adapt Optimize (TAO) Toolkit or TensorRT.

The primary use case intended for these models is detecting people in a color (RGB) image. The model can be used to detect people from photos and videos by using appropriate video or image decoding and pre-processing. As a secondary use case the model can also be used to detect bags and faces from images or videos. However, these additional classes are not the main intended use for these models.

There are two flavors of these models:

  • trainable (unpruned)
  • deployable (unpruned quantized)

The trainable (unpruned) models are intended for training with the TAO Toolkit and the user's own dataset. This can provide high-fidelity models that are adapted to the use case. The Jupyter notebook available as part of the TAO container can be used to re-train.

Instructions to use unpruned model with TAO

To use these models as pretrained weights for transfer learning, use the snippet below as a template for the model_config component of the experiment spec file when training a DetectNet_v2 model. For more information on the experiment spec file, refer to the TAO Toolkit User Guide.

  1. For ResNet34

model_config {
  num_layers: 34                # depth of the ResNet feature extractor
  pretrained_model_file: "/path/to/the/model.tlt"  # downloaded trainable model
  use_batch_norm: true
  objective_set {
    bbox {
      scale: 35.0               # bounding-box regression scale
      offset: 0.5               # grid-cell offset for the bbox targets
    }
    cov {                       # coverage (confidence) objective
    }
  }
  training_precision {
    backend_floatx: FLOAT32
  }
  arch: "resnet"
  all_projections: true
}

Technical blogs

  • Read the 2-part blog on training and optimizing a 2D body pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train a real-time license plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important; learn how you can achieve state-of-the-art accuracy for classification and object detection models using TAO.
  • Learn how to train an instance segmentation model using MaskRCNN with TAO.
  • Learn how to improve INT8 accuracy using quantization-aware training (QAT) with TAO.
  • Read the technical tutorial on how the PeopleNet model can be trained with custom data using the Transfer Learning Toolkit.

Suggested reading

  • More information about the TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
  • TAO documentation
  • Read the TAO Getting Started guide and release notes.
  • If you have any questions or feedback, refer to the discussions on the TAO Toolkit Developer Forums

Ethical Considerations

The NVIDIA PeopleNet AMR model detects faces; however, no additional information, such as race, gender, or skin type, is inferred about the faces.

The training and evaluation datasets consist mostly of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.