The model described in this card detects one or more “person” objects within an image and returns a box around each object, as well as a segmentation mask for each object.
This model is based on MaskRCNN with ResNet50 as its feature extractor. MaskRCNN is a widely adopted two-stage architecture that uses a Region Proposal Network (RPN) to generate object proposals and a set of prediction heads to predict object categories, refine bounding boxes, and generate instance masks.
The training algorithm optimizes the network to minimize the mask, localization, and confidence losses for the objects.
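In the standard Mask R-CNN formulation this corresponds to minimizing a sum of RPN and detection-head losses. As a rough sketch (the weights correspond to the rpn_box_loss_weight, fast_rcnn_box_loss_weight, and mrcnn_weight_loss_mask fields of the training spec shown later in this card):

$$
L = L_{\text{RPN cls}} + w_{\text{rpn box}}\,L_{\text{RPN box}} + L_{\text{cls}} + w_{\text{box}}\,L_{\text{box}} + w_{\text{mask}}\,L_{\text{mask}}
$$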
The PeopleSegNet v1.0 model was trained on a proprietary dataset with more than 5 million person-class objects. The training dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV). Approximately half of the training data consists of images captured in an indoor office environment, where the camera is typically mounted at a height of approximately 10 feet, at a 45-degree angle, with a close field of view. This content was chosen to improve the accuracy of the model for the convenience-store retail-analytics use case.
Environment | Images | Person Objects
---|---|---
5ft Indoor | 108,692 | 1,060,960
5ft Outdoor | 206,912 | 1,668,250
10ft Indoor (Office close FOV) | 413,270 | 4,577,870
10ft Outdoor | 18,321 | 178,817
20ft Indoor | 104,972 | 1,079,550
20ft Outdoor | 24,783 | 59,623
Total | 876,950 | 8,625,070
The training dataset was created by human labellers annotating ground-truth bounding boxes and categories. The following guidelines were used while labelling the training data for the NVIDIA PeopleSegNet model. If you are looking to re-train with your own dataset, please follow the guidelines below for the highest accuracy.
PeopleSegNet project labelling guidelines (a code sketch of the size and visibility rules follows this list):
1. All objects that fall under one of the three classes (person, face, bag) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @1920x1080) are labeled with the appropriate class label.
2. If a person is carrying an object, mark the bounding box to include the carried object as long as it does not affect the silhouette of the person. For example, exclude a rolling bag that the person is pulling behind them and that is distinctly visible as a separate object, but include a backpack, purse, etc. that does not significantly alter the silhouette of the pedestrian.
3. Occlusion: Partially occluded objects that do not belong to the person class and are approximately 60% or more visible are marked with a bounding box around the visible part of the object and flagged as partially occluded. Objects under 60% visibility are not annotated.
4. Occlusion for person class: If an occluded person's head and shoulders are visible and the visible height is approximately 20% or more, mark a bounding box around the visible part of the person. If the head and shoulders are not visible, follow the occlusion guidelines in item 3 above.
5. Truncation: Objects other than a person that are at the edge of the frame and are 60% or more visible are marked with a bounding box around the visible part and flagged as truncated.
6. Truncation for person class: If a truncated person's head and shoulders are visible and the visible height is approximately 20% or more, mark a bounding box around the visible part of the person. If the head and shoulders are not visible, follow the truncation guidelines in item 5 above.
7. Each frame is not required to contain an object.
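The size and visibility thresholds above can be checked programmatically when preparing your own labels. The following is a minimal sketch, assuming boxes in (left, top, right, bottom) pixel coordinates; the helper names and the exact flag handling are hypothetical and not part of any NVIDIA tool:

```python
# Hypothetical labelling-QA helpers illustrating the guidelines above.

def meets_min_size(box, img_w=1920, img_h=1080, min_px=10):
    """box = (left, top, right, bottom) in pixels at the labelling resolution.
    An object is labelled only if its width OR height is at least min_px
    when scaled to 1920x1080."""
    width = (box[2] - box[0]) * 1920.0 / img_w
    height = (box[3] - box[1]) * 1080.0 / img_h
    return width >= min_px or height >= min_px

def keep_label(box, visibility, is_person, head_shoulders_visible):
    """Apply the occlusion/truncation rules: non-person objects need roughly
    60% visibility; a person needs roughly 20% visible height when the head
    and shoulders are visible, otherwise the 60% rule applies."""
    if not meets_min_size(box):
        return False
    if is_person and head_shoulders_visible:
        return visibility >= 0.20
    return visibility >= 0.60
```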
The inference performance of the PeopleSegNet v1.0 model was measured against 42,000 proprietary images across a variety of environments. The frames are high-resolution (1920x1080) images that are resized to 960x576 pixels before being passed to the PeopleSegNet model.
True positives, false positives, and false negatives are calculated using an intersection-over-union (IoU) criterion of greater than 0.5. The model is evaluated on precision, recall, and accuracy; the KPIs for the ResNet 50 model on the evaluation data are reported in the table below, and a code sketch of the KPI computation follows the table.
Content | Precision | Recall | Accuracy
---|---|---|---
5ft | 93.69 | 90.36 | 85.45
10ft | 96.13 | 76.22 | 73.95
20ft | 97.58 | 91.88 | 90.52
Office use-case | 88.31 | 94.52 | 86.00
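For reference, the sketch below shows one way detections can be matched to ground truth with the IoU > 0.5 criterion and turned into these KPIs. The greedy matching order and the definition of accuracy as TP / (TP + FP + FN) are assumptions for illustration only; they are not taken from the evaluation code used to produce the table above.

```python
# KPI sketch: greedy IoU matching of detections to ground truth for one image.
# Boxes are (left, top, right, bottom); detections should be sorted by
# descending confidence before matching.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def kpis(detections, ground_truth, iou_thresh=0.5):
    matched, tp = set(), 0
    for det in detections:
        best, best_iou = None, iou_thresh
        for gi, gt in enumerate(ground_truth):
            if gi in matched:
                continue
            overlap = iou(det, gt)
            if overlap > best_iou:
                best, best_iou = gi, overlap
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(detections) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = tp / (tp + fp + fn) if tp + fp + fn else 0.0  # assumed definition
    return precision, recall, accuracy
```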
Inference is run on the provided pruned model at INT8 precision, except on the Jetson Nano, where FP16 precision is used. Inference performance is measured using `trtexec` on Jetson Nano, AGX Xavier, Xavier NX, and an NVIDIA T4 GPU. The Jetson devices run in the Max-N configuration for maximum GPU frequency. The performance shown here is inference only; end-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.
Platform | FPS |
---|---|
Nano | 0.6 |
Xavier NX | 8.5 |
AGX Xavier | 12.2 |
T4 | 40 |
These models need to be used with NVIDIA Hardware and Software. For Hardware, the models can run on any NVIDIA GPU including NVIDIA Jetson devices. These models can only be used with Train Adapt Optimize (TAO) Toolkit, DeepStream SDK or TensorRT.
The primary use case intended for the model is detecting and segmenting people in a color (RGB) image. The model can be used to detect and segment people in photos and videos by applying the appropriate video or image decoding and pre-processing.
The model is intended to be re-trained with the user's own dataset using the Transfer Learning Toolkit (TLT), or to be used as-is. Re-training can provide high-fidelity models that are adapted to the use case. The Jupyter notebook available as part of the TLT container can be used for re-training.
The model is encrypted and will only operate with the following key:
nvidia_tlt
Please make sure to use this as the key for all TAO commands that require a model load key.
Input: color images of resolution 960 x 576 x 3 (width x height x channels).
Output: a category label (person), bounding-box coordinates, and a segmentation mask for each detected person in the input image.
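As a concrete example of the expected input, the snippet below sketches one way to prepare a frame for the model's 960 x 576 RGB input. The channel ordering, batch dimension, and absence of normalization here are illustrative assumptions; the exact pre-processing is handled by TAO, DeepStream, or your TensorRT pipeline and may differ.

```python
import cv2
import numpy as np

def preprocess(frame_bgr):
    """Resize a BGR frame (e.g. from cv2.VideoCapture) to the 960x576 RGB
    input expected by PeopleSegNet and lay it out as a 1x3x576x960 tensor."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (960, 576))              # (width, height)
    chw = np.transpose(resized, (2, 0, 1))             # HWC -> CHW
    return np.expand_dims(chw, 0).astype(np.float32)   # add batch dimension
```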
In order to use these models as pretrained weights for transfer learning, please use the snippet below as a template for the maskrcnn_config component of the experiment spec file used to train a MaskRCNN model. For more information on the experiment spec file, please refer to the TAO Toolkit User Guide.
maskrcnn_config {
nlayers: 50
arch: "resnet"
gt_mask_size: 112
freeze_blocks: "[0]"
freeze_bn: True
# Region Proposal Network
rpn_positive_overlap: 0.7
rpn_negative_overlap: 0.3
rpn_batch_size_per_im: 256
rpn_fg_fraction: 0.5
rpn_min_size: 0.
# Proposal layer.
batch_size_per_im: 512
fg_fraction: 0.25
fg_thresh: 0.5
bg_thresh_hi: 0.5
bg_thresh_lo: 0.
# Faster-RCNN heads.
fast_rcnn_mlp_head_dim: 1024
bbox_reg_weights: "(10., 10., 5., 5.)"
# Mask-RCNN heads.
include_mask: True
mrcnn_resolution: 28
# training
train_rpn_pre_nms_topn: 2000
train_rpn_post_nms_topn: 1000
train_rpn_nms_threshold: 0.7
# evaluation
test_detections_per_image: 100
test_nms: 0.5
test_rpn_pre_nms_topn: 1000
test_rpn_post_nms_topn: 1000
test_rpn_nms_thresh: 0.7
# model architecture
min_level: 2
max_level: 6
num_scales: 1
aspect_ratios: "[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]"
anchor_scale: 8
# localization loss
rpn_box_loss_weight: 1.0
fast_rcnn_box_loss_weight: 1.0
mrcnn_weight_loss_mask: 1.0
}
To create an end-to-end video analytics application, deploy these models with the DeepStream SDK. The DeepStream SDK is a streaming analytics toolkit that accelerates building AI-based video analytics applications. DeepStream supports direct integration of these models into the deepstream-app sample application.
To deploy these models with DeepStream 5.1, please follow the instructions below:
Download and install the DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream development guide. The config files for the purpose-built models are located under /opt/nvidia/deepstream, the default DeepStream installation directory; this path will be different if you installed DeepStream in a different directory.
You will need two config files and one label file. These files are provided under NVIDIA-AI-IOT on GitHub:

- deepstream_app_source1_peoplesegnet.txt - Main config file for the DeepStream app
- pgie_peopleSegNet_tao_config.txt - File to configure inference settings
- peopleSegNet_labels.txt - Label file with 1 class
Key Parameters in pgie_peopleSegNet_tao_config.txt
tlt-model-key=nvidia_tlt
tlt-encoded-model=../../models/peopleSegNet/peopleSegNet_resnet50.etlt
model-engine-file=../../models/peopleSegNet/peopleSegNet_resnet50.etlt_b1_gpu0_fp16.engine
network-type=3 ## 3 is for instance segmentation network
labelfile-path=./peopleSegNet_labels.txt
int8-calib-file=../../models/peopleSegNet/cal.bin
infer-dims=3;576;960
num-detected-classes=2
Run `deepstream-app`:

deepstream-app -c deepstream_app_source1_peoplesegnet.txt
Documentation on deploying with DeepStream is provided in the "Deploying to DeepStream" chapter of the TAO User Guide.
The NVIDIA PeopleSegNet model was trained to detect objects larger than 10x10 pixels; it may not be able to detect objects smaller than 10x10 pixels.
When objects are occluded or truncated such that less than 20% of the object is visible, they may not be detected by the PeopleSegNet model. For person-class objects, the model will detect occluded people as long as the head and shoulders are visible. However, if the person's head and/or shoulders are not visible, the object might not be detected unless more than 60% of the person is visible.
The PeopleSegNet model was trained on RGB images in good lighting conditions. Therefore, images captured in dark lighting conditions, monochrome images, or IR camera images may not provide good detection results.
The PeopleSegNet model was not trained on fish-eye lens cameras or moving cameras. Therefore, the model may not perform well on warped images or images with motion-induced or other blur.
Although the bag and face classes are included in the model, the accuracy of these classes is much lower than that of the person class. Some re-training will be required on these classes to improve accuracy.
License to use these models is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of these licenses.
The training and evaluation datasets mostly consist of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.