PeopleSemSegNet AMR

Description: Semantic segmentation of persons in an image.
Publisher: -
Latest Version: deployable_v1.1
Modified: August 19, 2024
Size: 3.71 MB

PeopleSemSegNet AMR Model Card

Model Overview

The PeopleSemSegNet Autonomous Mobile Robot (AMR) model detects one or more "person" objects in an image and returns a semantic segmentation mask covering all people in the image. This model is ready for commercial use.


Using TAO Pre-trained Models

  • Get TAO Container
  • Get other purpose-built models from the NGC model registry:
    • PeopleNet
    • TrafficCamNet
    • FaceDetectIR
    • VehicleMakeNet
    • VehicleTypeNet

Model Architecture

Architecture Type: Convolutional Neural Network (CNN)
Network Architecture: UNet

UNet is a widely adopted network for semantic segmentation, with applications in autonomous vehicles, industrial inspection, smart cities, and more. UNet is a fully convolutional network whose encoder is composed of convolutional layers and whose decoder is composed of transposed-convolution or upsampling layers; it predicts a class label for every pixel in the input image.
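
To make the encoder-decoder idea concrete, here is a minimal toy UNet sketch in PyTorch. This is an illustration only, not the TAO implementation: the layer counts, channel widths, and class count (2) are assumptions, and the actual model uses the backbone specified in the model_config later in this card.

```py
# Minimal toy UNet sketch (PyTorch) illustrating the encoder-decoder idea.
# NOT the TAO implementation; layer counts and channels are illustrative.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU, as in typical UNet stages.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)  # decoder upsampling
        self.dec1 = conv_block(64, 32)  # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, 1)  # per-pixel class logits

    def forward(self, x):
        s1 = self.enc1(x)                 # full-resolution encoder features
        s2 = self.enc2(self.pool(s1))     # downsampled encoder features
        d1 = self.up(s2)                  # upsample back to full resolution
        d1 = self.dec1(torch.cat([s1, d1], dim=1))  # skip connection
        return self.head(d1)              # (N, num_classes, H, W) logits

logits = TinyUNet()(torch.randn(1, 3, 544, 960))  # matches the 960x544 input
print(logits.shape)  # torch.Size([1, 2, 544, 960])
```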

Input:

Input Type(s): Images
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: 4D
Other Properties Related to Input:

  • Fixed resolution: 960 x 544 x 3 (W x H x C)
  • Channel ordering: NCHW, where N = batch size, C = number of channels (3), H = image height (544), W = image width (960)
  • Input scale: 1/255.0
  • Mean subtraction: None
  • No minimum bit depth, alpha, or gamma
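
A hedged sketch of this preprocessing in Python follows; the file name is a placeholder, and the exact decode step depends on your pipeline (DeepStream, TensorRT, etc.).

```py
# Sketch of the documented preprocessing: RGB, 960x544, scale 1/255, NCHW.
# "input.jpg" is a placeholder; actual decoding depends on your pipeline.
import numpy as np
from PIL import Image

img = Image.open("input.jpg").convert("RGB").resize((960, 544))  # PIL takes (W, H)
x = np.asarray(img, dtype=np.float32) / 255.0    # scale by 1/255, no mean subtraction
x = np.transpose(x, (2, 0, 1))[np.newaxis, ...]  # HWC -> CHW, add batch dim -> NCHW
assert x.shape == (1, 3, 544, 960)
```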

Input image

Output:

Output Type(s): Label(s), Segmentation Mask, Confidence Scores
Output Format: Label: Text String(s); Segmentation Mask, Confidence Scores: Floating Point
Other Properties Related to Output: Category Label(s): Bag, Face, Person; Segmentation Mask; Confidence Scores
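
One plausible way to decode raw per-pixel class scores into a label mask and confidence map is sketched below; the softmax decode and the two-class layout (0 = background, 1 = person, matching the data_class_config later in this card) are assumptions.

```py
# Turn per-pixel class scores (N, C, H, W) into a label mask and confidence map.
# Assumes a softmax-style decode; the exact output head may differ by runtime.
import numpy as np

def decode(scores: np.ndarray):
    e = np.exp(scores - scores.max(axis=1, keepdims=True))  # numerically stable softmax over C
    probs = e / e.sum(axis=1, keepdims=True)
    labels = probs.argmax(axis=1)    # (N, H, W) class ids, e.g. 0=background, 1=person
    confidence = probs.max(axis=1)   # (N, H, W) per-pixel confidence
    return labels, confidence

labels, conf = decode(np.random.randn(1, 2, 544, 960).astype(np.float32))
```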

Output image

Software Integration:

Runtime Engine(s):

  • TAO 5.1

Supported Hardware Architecture Compatibility:

  • Ampere
  • Jetson
  • Hopper
  • Lovelace
  • Pascal
  • Turing

Preferred Operating System(s):

  • Linux
  • Linux for Tegra (L4T)

Model versions

  • deployable_v1.0
  • trainable_v1.0

Training

  • Data Collection Method by dataset
    • Automatic/Sensors
  • Labeling Method by dataset:
    • Human

This model was trained using the UNet entrypoint in TAO. The training algorithm optimizes the network to minimize the per-pixel segmentation loss. The training is carried out in two phases: in the first phase, the network is trained without regularization.

Training Data Properties

Internal, proprietary dataset with more than 5 million objects for the person class. The training dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV). Approximately half of the training data consists of images captured in an indoor office environment, where the camera is typically mounted at a height of about 10 feet at a 45-degree angle with a close field of view. This content was chosen to improve the accuracy of the models for people with extended-arm poses. We also added approximately 43 thousand images of low-density scenes captured from a robot's point of view to improve performance for use cases where person detection is needed at low camera heights.

| Environment | Images | Persons |
|---|---|---|
| 5ft Indoor | 108,692 | 1,060,960 |
| 5ft Outdoor | 206,912 | 1,668,250 |
| 10ft Indoor (Office, close FOV) | 413,270 | 4,577,870 |
| 10ft Outdoor | 18,321 | 178,817 |
| 20ft Indoor | 104,972 | 1,079,550 |
| 20ft Outdoor | 24,783 | 59,623 |
| Robotics Subset | 43,076 | 160,806 |
| Total | 920,026 | 8,785,876 |

Training Data Ground-truth Labeling Guidelines

The training dataset was created by human labellers annotating ground-truth bounding boxes and categories. The following guidelines were used while labelling the training data for the NVIDIA PeopleSemSegNet AMR model. If you want to re-train with your own dataset, please follow the guidelines below for the highest accuracy.

PeopleSemSegNet AMR project labelling guidelines:

  1. All objects that fall under one of the three classes (person, face, bag) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @1920x1080) are labeled with the appropriate class label.

  2. If a person is carrying an object, mark the bounding box to include the carried object as long as it doesn't alter the person's silhouette. For example, exclude a rolling bag if the person is pulling it behind them and it is distinctly visible as a separate object, but include a backpack, purse, etc. that does not significantly alter the pedestrian's silhouette.

  3. Occlusion: For partially occluded objects that do not belong to the person class and are approximately 60% or more visible, draw the bounding box around the visible part of the object and mark the object as partially occluded. Objects under 60% visibility are not annotated.

  4. Occlusion for person class: If an occluded person's head and shoulders are visible and the visible height is approximately 20% or more, mark the bounding box around the visible part of the person. If the head and shoulders are not visible, follow the occlusion guidelines in item 3 above.

  5. Truncation: Objects other than persons that are at the edge of the frame and are 60% or more visible are marked with the truncation flag.

  6. Truncation for person class: If a truncated person's head and shoulders are visible and the visible height is approximately 20% or more, mark the bounding box around the visible part of the person. If the head and shoulders are not visible, follow the truncation guidelines in item 5 above.

  7. Each frame is not required to have an object.

  8. The segmentation masks were labeled using an NVIDIA internal auto-labeling tool.

Performance

Evaluation Data Properties

  • Data Collection Method by dataset
    • Automatic/Sensors
  • Labeling Method by dataset:
    • Human

5,000 proprietary images across a variety of environments, captured from a robot's point of view.

Methodology and KPI

The KPIs for the evaluation data are reported in the table below. The model is evaluated on Mean Intersection-Over-Union (MIOU), a common evaluation metric for semantic image segmentation that first computes the IOU for each semantic class and then averages over classes.

| Model | Content | MIOU |
|---|---|---|
| PeopleSemSegNet AMR | Robot's point of view | 87 |
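
For reference, the metric can be computed from label masks as sketched below; this mirrors the standard confusion-based mIoU formulation, not NVIDIA's exact evaluation code, and the two-class assumption (0=background, 1=person) is illustrative.

```py
# Mean IoU from predicted and ground-truth label masks (standard formulation).
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int = 2) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                    # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 2, (544, 960))
gt = np.random.randint(0, 2, (544, 960))
print(mean_iou(pred, gt))
```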

Real-time Inference Performance

The inference is run on the provided unpruned model at INT8 precision; on the Jetson Nano, FP16 precision is used. The inference performance is measured using trtexec on the Jetson devices and NVIDIA GPUs listed below. The Jetson devices run in the Max-N configuration for maximum GPU frequency. The numbers shown are inference-only performance; end-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.

BS - Batch Size

PeopleSemSegNet AMR:

| Device | BS | GPU FPS |
|---|---|---|
| Xavier NX | 16 | 199 |
| AGX Xavier | 16 | 356 |
| Orin NX | 16 | 289 |
| Orin | 32 | 703 |
| T4 | 64 | 1027.85 |
| A100 | 64 | 5745.79 |
| A30 | 64 | 2862.76 |
| A10 | 64 | 2429.62 |
| A2 | 16 | 631.31 |

How to use this model

These models must be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.

The model is intended to be trained with TAO Toolkit on the user's own dataset, or used as-is. Re-training can provide high-fidelity models adapted to the use case. The Jupyter notebook available as part of the TAO container can be used for re-training.

The primary use case for this model is segmenting people in a color (RGB) image. The model can be used to segment people in photos and videos given appropriate video or image decoding and pre-processing. Note that this model performs semantic segmentation, not instance segmentation.

The model is encrypted and will only operate with the following key:

  • Model load key: tlt_encode

Please make sure to use this as the key for all TAO commands that require a model load key.

Instructions to use unpruned model with TAO

In order to use these models as pretrained weights for transfer learning, please use the snippet below as a template for the model_config component of the experiment spec file for training a UNet model. For more information on the experiment spec file, please refer to the TAO Toolkit User Guide.

Model Config

```py
model_config {
  num_layers: 18
  model_input_width: 960
  model_input_height: 544
  model_input_channels: 3
  all_projections: true
  arch: "shufflenet"
  use_batch_norm: true
  training_precision {
    backend_floatx: FLOAT32
  }
}
```

Use the following dataset config parameters alongside your own train_data_sources, val_data_sources, and test_data_sources. Please note that these are the default parameters used to generate the segmentation for the inferred image above. Please refer to the TAO Toolkit User Guide and experiment with the resize_method and resize_padding arguments to achieve the highest-quality masks on your dataset.

```py
dataset: "custom"
augment: False
input_image_type: "color"
resize_padding: True
resize_method: "NEAREST_NEIGHBOR"
```


Use the following to map classes to the predicted label ids. The person class is represented by id 1 and the background by id 0. An example `data_class_config` to be used for train/evaluate/inference in the experiment spec is as follows:

```py
data_class_config {
  target_classes {
    name: "person"
    mapping_class: "person"
    label_id: 1
  }
  target_classes {
    name: "background"
    mapping_class: "background"
    label_id: 0
  }
}
```
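
As an illustration of how this mapping might be consumed downstream, the snippet below colorizes a predicted label-id mask for visualization; the palette and output file name are arbitrary assumptions, not part of TAO.

```py
# Colorize a predicted label-id mask per the data_class_config mapping above.
# Palette is arbitrary; 0=background, 1=person.
import numpy as np
from PIL import Image

PALETTE = {0: (0, 0, 0), 1: (0, 255, 0)}  # background=black, person=green

def colorize(mask: np.ndarray) -> Image.Image:
    out = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for label_id, color in PALETTE.items():
        out[mask == label_id] = color  # paint all pixels of this class
    return Image.fromarray(out)

colorize(np.random.randint(0, 2, (544, 960))).save("mask_vis.png")
```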

Technical blogs

  • Read the two-part blog on training and optimizing a 2D body pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train a real-time license plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important; learn how to achieve state-of-the-art accuracy for classification and object detection models using TAO.
  • Learn how to train an instance segmentation model using MaskRCNN with TAO.
  • Learn how to improve INT8 accuracy using Quantization-Aware Training (QAT) with TAO.
  • Read the technical tutorial on how the PeopleNet model can be trained with custom data using the Transfer Learning Toolkit.

Suggested reading

  • More information about TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone.
  • TAO documentation
  • Read the TAO Getting Started guide and release notes.
  • If you have any questions or feedback, please refer to the discussions on the TAO Toolkit Developer Forums.

Ethical Considerations

The NVIDIA PeopleSemSegNet AMR model detects faces. However, no additional information such as race, gender, or skin type about the faces is inferred.

The training and evaluation datasets consist mostly of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.