
PeopleNet


Description

3-class object detection network to detect people in an image.

Publisher

NVIDIA

Use Case

Object Detection

Framework

Transfer Learning Toolkit

Latest Version

pruned_v1.0

Modified

August 24, 2021

Size

11.3 MB

PeopleNet Model Card

Model Overview

The models described in this card detect one or more physical objects from three categories within an image and return a bounding box around each object, as well as a category label for each object. The three categories of objects detected by these models are persons, bags, and faces.

Model Architecture

These models are based on the NVIDIA DetectNet_v2 detector with ResNet34 as the feature extractor. This architecture, also known as GridBox object detection, uses bounding-box regression on a uniform grid over the input image. The GridBox system divides the input image into a grid and, for each cell, predicts four normalized bounding-box parameters (xc, yc, w, h) and a confidence value per output class.

The raw normalized bounding-box and confidence detections need to be post-processed by a clustering algorithm such as DBSCAN or NMS to produce final bounding-box coordinates and category labels.
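
To make this step concrete, below is a minimal NumPy sketch that thresholds a per-class confidence (coverage) grid and applies greedy NMS to candidate boxes. It assumes the boxes have already been decoded from the grid outputs to absolute (x1, y1, x2, y2) pixel coordinates, uses an illustrative confidence threshold of 0.4, and uses NMS rather than DBSCAN; the exact DetectNet_v2 decoding and the clustering performed by TLT/DeepStream are not reproduced here.

import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; returns indices of the boxes to keep."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_threshold]
    return keep

def postprocess(decoded_boxes, coverage, conf_threshold=0.4):
    """decoded_boxes: (grid_h, grid_w, n_classes, 4); coverage: (grid_h, grid_w, n_classes)."""
    results = {}
    for cls in range(coverage.shape[-1]):
        ys, xs = np.where(coverage[..., cls] > conf_threshold)
        boxes = decoded_boxes[ys, xs, cls]
        scores = coverage[ys, xs, cls]
        keep = nms(boxes, scores) if len(boxes) else []
        results[cls] = [(boxes[k], scores[k]) for k in keep]
    return results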

Training Algorithm

The training algorithm optimizes the network to minimize the localization and confidence loss for the objects. Training is carried out in two phases. In the first phase, the network is trained with regularization to facilitate pruning. Following the first phase, the network is pruned by removing channels whose kernel norms fall below the pruning threshold. In the second phase, the pruned network is retrained. For the quantized INT8 model, a third quantization-aware training (QAT) phase is carried out. Regularization is not included in the second and third phases.
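
To illustrate the pruning criterion only, the short NumPy sketch below scores the output channels of a single convolution layer by the L2 norm of their kernels and keeps the channels above a threshold. The weight layout (out_channels, in_channels, kH, kW), the threshold value, and the function name are assumptions for illustration; this is not the actual TLT pruning implementation.

import numpy as np

def channels_to_keep(conv_weight, threshold=0.1):
    """conv_weight: (out_channels, in_channels, kH, kW); returns indices of channels to retain."""
    norms = np.sqrt((conv_weight ** 2).sum(axis=(1, 2, 3)))  # one kernel norm per output channel
    return np.where(norms >= threshold)[0]

# Score a hypothetical 64-channel 3x3 convolution and drop low-norm channels.
w = np.random.randn(64, 32, 3, 3).astype(np.float32) * 0.01
keep = channels_to_keep(w, threshold=0.17)  # illustrative threshold
pruned_w = w[keep]  # retained kernels; the pruned network is then retrained in the second phase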

Citations

  • Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: CVPR (2016)
  • Erhan, D., Szegedy, C., Toshev, A., Anguelov, D.: Scalable object detection using deep neural networks. In: CVPR (2014)
  • He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition. In: CVPR (2015)

Intended Use

The primary use case for these models is detecting people in a color (RGB) image. The models can be used to detect people in photos and videos by applying appropriate video or image decoding and pre-processing. As a secondary use case, the models can also be used to detect bags and faces in images or videos. However, these additional classes are not the main intended use for these models.

Input

  • RGB image of dimensions 960 x 544 x 3 (W x H x C)
  • Channel ordering of the input: NCHW, where N = batch size, C = number of channels (3), H = height of the image (544), W = width of the image (960)
  • Input scale: 1/255.0
  • Mean subtraction: none
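
As an illustration of this specification, here is a minimal pre-processing sketch in Python using Pillow and NumPy. The function name and the choice to resize directly to 960x544 without preserving aspect ratio are assumptions for illustration, not part of the model card.

import numpy as np
from PIL import Image

def preprocess(image_path, width=960, height=544):
    """Load an RGB image and return an NCHW float32 tensor of shape (1, 3, 544, 960)."""
    img = Image.open(image_path).convert("RGB").resize((width, height))
    x = np.asarray(img, dtype=np.float32) / 255.0  # input scale 1/255.0, no mean subtraction
    x = x.transpose(2, 0, 1)                       # HWC -> CHW
    return x[np.newaxis, ...]                      # add batch dimension -> NCHW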

Output

Category labels (person, bag, face) and bounding-box coordinates for each detected object in the input image.

How to use this model

These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. The models can only be used with the Transfer Learning Toolkit (TLT), DeepStream SDK, or TensorRT.

There are two flavors of these models:

  • unpruned
  • pruned

The unpruned models are intended for training with the Transfer Learning Toolkit and the user's own dataset. This can provide high-fidelity models that are adapted to the use case. The Jupyter notebook available as part of the TLT container can be used to re-train.

The pruned models are intended for efficient deployment on the edge using DeepStream SDK or TensorRT. These models accept 960x544x3 input tensors and output a 60x34x12 bbox coordinate tensor and a 60x34x3 class confidence tensor. DeepStream provides a toolkit to create efficient video analytics pipelines that capture, decode, and pre-process the data before running inference. DeepStream then post-processes the output bbox coordinate and class confidence tensors with the NMS or DBSCAN clustering algorithm to create the final bounding boxes. The sample application and config file to run these models are provided in the DeepStream SDK.

The unpruned and pruned models are encrypted and will only operate with the following key:

  • Model load key: tlt_encode

Please make sure to use this as the key for all TLT commands that require a model load key.

Model versions

  • unpruned_v2.1 - ResNet34-based pre-trained model, intended for training.
  • pruned_v2.1 - ResNet34 floating-point deployment model.
  • quantized_v2.1 - ResNet34 INT8 deployment model. Contains calibration caches for GPU and DLA; the DLA cache is required when running inference on the Jetson AGX Xavier or Xavier NX DLA.

Instructions to use unpruned model with TLT

In order to use these models as pretrained weights for transfer learning, use the snippet below as a template for the model_config component of the experiment spec file to train a DetectNet_v2 model. For more information on the experiment spec file, please refer to the Transfer Learning Toolkit User Guide.

  1. For ResNet34
model_config {
  num_layers: 34                                    # ResNet34 feature extractor
  pretrained_model_file: "/path/to/the/model.tlt"   # path to the downloaded unpruned .tlt model
  use_batch_norm: true
  objective_set {
    bbox {
      scale: 35.0
      offset: 0.5
    }
    cov {
    }
  }
  training_precision {
    backend_floatx: FLOAT32
  }
  arch: "resnet"
  all_projections: true
}

Instructions to deploy these models with DeepStream

To create the entire end-to-end video analytics application, deploy this model with the DeepStream SDK. DeepStream SDK is a streaming analytics toolkit to accelerate deployment of AI-based video analytics applications. The pruned model included here can be integrated directly into DeepStream by following the instructions below.

  1. Run the default deepstream-app included in the DeepStream docker by executing the commands below.

    ## Download Model:
    
    mkdir -p $HOME/peoplenet && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_peoplenet/versions/pruned_v1.0/files/resnet34_peoplenet_pruned.etlt \
    -O $HOME/peoplenet/resnet34_peoplenet_pruned.etlt
    
    ## Run Application
    
    xhost +
    docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v $HOME:/opt/nvidia/deepstream/deepstream-5.1/samples/models/tlt_pretrained_models \
    -w /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models nvcr.io/nvidia/deepstream:5.1-21.02-samples \
    deepstream-app -c deepstream_app_source1_peoplenet.txt
    
  2. Install DeepStream on your local host and run the deepstream-app.

    Download and install the DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream Development Guide. The config files for the purpose-built models are located in:

    /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models
    

    /opt/nvidia/deepstream is the default DeepStream installation directory. This path will be different if you are installing in a different directory.

    You will need 2 config files and 1 label file. These files are provided in the tlt_pretrained_models directory.

    deepstream_app_source1_peoplenet.txt - Main config file for DeepStream app
    config_infer_primary_peoplenet.txt - File to configure inference settings
    labels_peoplenet.txt - Label file with 3 classes
    

    Key Parameters in config_infer_primary_peoplenet.txt

    tlt-model-key=
    tlt-encoded-model=
    labelfile-path=
    int8-calib-file=
    input-dims=
    num-detected-classes=
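
    For reference, the snippet below shows one way these properties might be filled in, assuming the pruned model was downloaded into the mounted tlt_pretrained_models directory as in step 1 above. The relative paths, the commented-out int8-calib-file line (only meaningful when a calibration cache is available, e.g. with a quantized model version), and the input-dims ordering are illustrative assumptions; check them against the config file shipped with DeepStream.

    tlt-model-key=tlt_encode
    tlt-encoded-model=../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt
    labelfile-path=labels_peoplenet.txt
    # int8-calib-file=<path to calibration cache, when deploying an INT8 model version>
    input-dims=3;544;960;0
    num-detected-classes=3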
    

    Run deepstream-app:

    deepstream-app -c deepstream_app_source1_peoplenet.txt
    

    Documentation to deploy with DeepStream is provided in the "Deploying to DeepStream" chapter of the TLT User Guide.

Example

Input image

Output image

Training Data

The PeopleNet v1.0 model was trained on a proprietary dataset with more than 17 million objects of the person class. The training dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV). Approximately half of the training data consists of images captured in an indoor office environment. In this case, the camera is typically set up at approximately 10 feet height and a 45-degree angle and has a close field of view. This content was chosen to improve the accuracy of the models for the convenience-store retail analytics use case.

Object Distribution
Environment                       Images      Persons      Bags        Faces
5ft Indoor                        108,692     1,060,960    664,251     235,992
5ft Outdoor                       206,912     1,668,250    819,518     657,162
10ft Indoor (Office close FOV)    453,034     6,673,994    1,080,515   2,344,048
10ft Outdoor                      18,321      178,817      74,047      56,097
20ft Indoor                       134,521     2,124,981    787,884     633,802
20ft Outdoor                      24,783      59,623       46,182      27,840
Random Rotated -15deg             75,471      880,995      273,716     301,663
Random Rotated -30deg             75,264      874,063      271,840     297,110
Random Rotated -45deg             75,259      872,815      271,840     297,110
Random Rotated 15deg              75,483      877,374      273,215     302,024
Random Rotated 30deg              75,478      872,110      273,093     298,462
Random Rotated 45deg              75,472      875,688      274,168     297,835
Total                             1,398,690   17,019,670   5,110,470   5,750,191

Training Data Ground-truth Labeling Guidelines

The training dataset was created by human labellers annotating ground-truth bounding boxes and categories. The following guidelines were used while labelling the training data for the NVIDIA PeopleNet model. If you are looking to re-train with your own dataset, please follow these guidelines for the highest accuracy.

PeopleNet project labelling guidelines:

  1. All objects that fall under one of the three classes (person, face, bag) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @1920x1080) are labeled with the appropriate class label.

  2. If a person is carrying an object, mark the bounding box to include the carried object as long as it doesn't affect the silhouette of the person. For example, exclude a rolling bag if the person is pulling it behind them and it is distinctly visible as a separate object, but include a backpack, purse, etc. that does not significantly alter the silhouette of the pedestrian.

  3. Occlusion: Partially occluded objects that do not belong to the person class and are approximately 60% or more visible are labeled with a bounding box around the visible part of the object and are marked as partially occluded. Objects under 60% visibility are not annotated.

  4. Occlusion for the person class: If an occluded person's head and shoulders are visible and the visible height is approximately 20% or more, the object is labeled with a bounding box around the visible part of the person. If the head and shoulders are not visible, please follow the occlusion guidelines in item 3 above.

  5. Truncation: An object other than a person that is at the edge of the frame and is 60% or more visible is labeled with a bounding box around the visible part and marked with the truncation flag.

  6. Truncation for the person class: If a truncated person's head and shoulders are visible and the visible height is approximately 20% or more, mark the bounding box around the visible part of the person. If the head and shoulders are not visible, please follow the truncation guidelines in item 5 above.

  7. Each frame is not required to have an object.

Evaluation Data

Dataset

The inference performance of the PeopleNet v2.1 model was measured against 50,000 proprietary images across a variety of environments. The frames are high-resolution 1920x1080-pixel images, resized to 960x544 pixels before being passed to the PeopleNet detection model.

Methodology and KPI

True positives, false positives, and false negatives are calculated using an intersection-over-union (IOU) criterion of greater than 0.5. The KPIs for the evaluation data are reported in the table below. The model is evaluated based on precision, recall, and accuracy.

              PeopleNetV2.1 FP16                PeopleNetV2.1 INT8
Content       Precision  Recall  Accuracy       Precision  Recall  Accuracy
Generic       90.78      86.28   79.38          90.70      84.92   78.17
Office        95.00      88.44   84.51          94.08      88.46   83.79
Rotation      95.00      88.44   84.51          90.70      93.43   85.87
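
For reference, the hedged sketch below shows one way true positives, false positives, and false negatives can be counted for a single image and class by greedily matching predictions to ground-truth boxes at IoU > 0.5. The greedy matching strategy and the definition of accuracy as TP / (TP + FP + FN) are assumptions for illustration; the model card does not specify the exact matching procedure.

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def detection_metrics(preds, gts, iou_thresh=0.5):
    """preds: list of (box, score); gts: list of ground-truth boxes."""
    matched, tp = set(), 0
    for box, _ in sorted(preds, key=lambda p: p[1], reverse=True):
        best = max(((iou(box, g), j) for j, g in enumerate(gts) if j not in matched),
                   default=(0.0, -1))
        if best[0] > iou_thresh:
            matched.add(best[1])
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    accuracy = tp / (tp + fp + fn + 1e-9)  # assumed definition, see note above
    return precision, recall, accuracy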

Real-time Inference Performance

Inference is run on the provided pruned models at INT8 precision; on Jetson Nano, FP16 precision is used. The inference performance is measured using trtexec on Jetson Nano, AGX Xavier, Xavier NX, and NVIDIA T4 GPUs. The Jetson devices run at the Max-N configuration for maximum GPU frequency. The performance shown here is the inference-only performance. The end-to-end performance with streaming video data might vary slightly depending on other bottlenecks in the hardware and software.

Limitations

Very Small Objects

The NVIDIA PeopleNet models were trained to detect objects larger than 10x10 pixels. Therefore, they may not be able to detect objects smaller than 10x10 pixels.

Occluded Objects

When objects are occluded or truncated such that less than 20% of the object is visible, they may not be detected by the PeopleNet model. For person-class objects, the model will detect occluded people as long as the head and shoulders are visible. However, if the person's head and/or shoulders are not visible, the object might not be detected unless more than 60% of the person is visible.

Dark-lighting, Monochrome or Infrared Camera Images

The PeopleNet models were trained on RGB images in good lighting conditions. Therefore, images captured in dark lighting conditions, monochrome images, or IR camera images may not provide good detection results.

Warped and Blurry Images

The PeopleNet models were not trained on fisheye-lens cameras or moving cameras. Therefore, the models may not perform well on warped images or images with motion-induced or other blur.

Face and Bag Classes

Although the bag and face classes are included in the model, the accuracy for these classes will be much lower than for the person class. Some re-training will be required on these classes to improve accuracy.


License

The license to use these models is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of the license.


Ethical AI

The NVIDIA PeopleNet model detects faces. However, no additional information, such as race, gender, or skin type, is inferred from the faces.

The training and evaluation datasets consist mostly of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.