PeopleSemSegNet AMR

Description
Semantic segmentation of persons in an image.
Publisher
-
Latest Version
v1.0
Modified
October 19, 2023
Size
30.31 MB

PeopleSemSegNet AMR Model Card

Model Overview

The model described in this card segments one or more “person” objects within an image and returns a semantic segmentation mask for all people in the image.

Model Architecture

UNet is a widely adopted network for semantic segmentation, with applications in autonomous vehicles, industrial inspection, smart cities, and more. UNet is a fully convolutional network with an encoder comprised of convolutional layers and a decoder comprised of transposed convolutions or upsampling layers; it predicts a class label for every pixel in the input image.
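As a rough, hedged sketch of this encoder-decoder idea (not the shipped ShuffleSeg network definition; layer counts, channel widths, and the shuffle blocks are omitted), a minimal UNet-style model in PyTorch could look like:

```py
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal UNet-style sketch: convolutional encoder, transposed-convolution
    decoder, and per-pixel class logits. Illustrative only."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU())
        self.head = nn.Conv2d(16, num_classes, 1)   # one logit per class, per pixel

    def forward(self, x):
        e1 = self.enc1(x)        # full-resolution features
        e2 = self.enc2(e1)       # downsampled features
        d1 = self.dec1(e2) + e1  # upsample and fuse (skip connection)
        return self.head(d1)     # (N, num_classes, H, W) logits

logits = TinyUNet()(torch.randn(1, 3, 544, 960))  # -> torch.Size([1, 2, 544, 960])
```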

We provide a low complexity model of the classical UNet for PeopleSemSegNet AMR:

Shuffleseg (Low complexity)

Training Algorithm

The training algorithm optimizes the network to minimize the cross-entropy loss for every pixel of the mask.
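As a hedged illustration of that objective (the actual TAO training loop is not reproduced here), PyTorch's `CrossEntropyLoss` applied to (N, classes, H, W) logits and an (N, H, W) label mask computes exactly this per-pixel cross-entropy:

```py
import torch
import torch.nn as nn

# logits: (N, num_classes, H, W); labels: (N, H, W) with 0 = background, 1 = person
logits = torch.randn(2, 2, 544, 960, requires_grad=True)
labels = torch.randint(0, 2, (2, 544, 960))

criterion = nn.CrossEntropyLoss()   # averaged over every pixel of the mask
loss = criterion(logits, labels)
loss.backward()                     # gradients drive the optimizer step
```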

Training Data

The PeopleSemSegNet AMR model was trained on a proprietary dataset with more than 5 million person-class objects. The training dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV). Approximately half of the training data consists of images captured in an indoor office environment; for this content, the camera is typically mounted at a height of approximately 10 feet at a 45-degree angle and has a close field of view. This content was chosen to improve the accuracy of the model for people with extended-arm poses. We also added approximately 45 thousand images of low-density scenes from a robot's point of view to improve performance for use cases where person detection is needed at low camera heights.

| Environment | Images | Persons |
|---|---|---|
| 5ft Indoor | 108,692 | 1,060,960 |
| 5ft Outdoor | 206,912 | 1,668,250 |
| 10ft Indoor (Office, close FOV) | 413,270 | 4,577,870 |
| 10ft Outdoor | 18,321 | 178,817 |
| 20ft Indoor | 104,972 | 1,079,550 |
| 20ft Outdoor | 24,783 | 59,623 |
| Robotics Subset | 43,076 | 160,806 |
| Total | 920,026 | 8,785,876 |

Training Data Ground-truth Labeling Guidelines

The training dataset is created by labeling ground-truth bounding-boxes and categories by human labellers. The following guidelines were used while labelling the training data for the NVIDIA PeopleSemSegNet AMR model. If you are looking to re-train with your own dataset, please follow the guidelines below for the highest accuracy.

PeopleSemSegNet AMR project labelling guidelines:

  1. All objects that fall under one of the three classes (person, face, bag) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @1920x1080) are labeled with the appropriate class label.

  2. If a person is carrying an object, mark the bounding-box to include the carried object as long as it doesn’t affect the silhouette of the person. For example, exclude a rolling bag if the person is pulling it behind them and it is distinctly visible as a separate object, but include a backpack, purse, etc. that does not significantly alter the silhouette of the pedestrian.

  3. Occlusion: Partially occluded objects that do not belong to the person class and are approximately 60% or more visible are marked with a bounding box around the visible part of the object and flagged as partially occluded. Objects under 60% visibility are not annotated.

  4. Occlusion for person class: If an occluded person’s head and shoulders are visible and the visible height is approximately 20% or more, the object is marked with a bounding box around the visible part of the person. If the head and shoulders are not visible, please follow the occlusion guidelines in item 3 above.

  5. Truncation: An object other than a person that is at the edge of the frame and is 60% or more visible is marked with the truncation flag for the object.

  6. Truncation for person class: If a truncated person’s head and shoulders are visible and the visible height is approximately 20% or more, mark the bounding box around the visible part of the person. If the head and shoulders are not visible, please follow the truncation guidelines in item 5 above.

  7. Each frame is not required to have an object.

  8. The segmentation masks were labeled using an NVIDIA internal auto-labeling tool.

Performance

Evaluation Data

The inference performance of the PeopleSemSegNet AMR model was measured against 5,000 proprietary images across a variety of environments from a robot's point of view. The frames are high-resolution 1920x1080 images, resized to 960x544 pixels before being passed to the PeopleSemSegNet AMR segmentation model.

Methodology and KPI

The KPI for the evaluation data are reported in the table below. The model is evaluated on Mean Intersection-Over-Union (MIOU), a common evaluation metric for semantic image segmentation that first computes the IOU for each semantic class and then averages over classes; a minimal computation sketch follows the table.

| Model | Content | MIOU (%) |
|---|---|---|
| PeopleSemSegNet AMR | Robot's point of view | 87 |
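For reference, a minimal NumPy sketch of how MIOU can be computed from a predicted label-id mask and a ground-truth mask (assuming ids 0 = background and 1 = person, matching the class mapping described later in this card):

```py
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int = 2) -> float:
    """Compute per-class IoU from label-id masks, then average over classes."""
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                     # skip classes absent from both masks
            ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example with two 4x4 masks
pred = np.array([[1, 1, 0, 0]] * 4)
gt   = np.array([[1, 0, 0, 0]] * 4)
print(mean_iou(pred, gt))  # background IoU ~ 0.67, person IoU = 0.5 -> MIOU ~ 0.58
```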

Real-time Inference Performance

The inference is run on the provided unpruned model at INT8 precision; on Jetson Nano, FP16 precision is used. The inference performance is measured using trtexec on Jetson devices and NVIDIA discrete GPUs. The Jetson devices are run in the Max-N configuration for maximum GPU frequency. The performance shown here is inference-only performance; end-to-end performance with streaming video data may vary slightly depending on other bottlenecks in the hardware and software.

BS - Batch Size

PeopleSemSegNet AMR:

| Device | BS | GPU FPS |
|---|---|---|
| Jetson Xavier NX | 16 | 199 |
| Jetson AGX Xavier | 16 | 356 |
| Jetson Orin NX | 16 | 289 |
| Jetson Orin | 32 | 703 |
| T4 | 64 | 1027.85 |
| A100 | 64 | 5745.79 |
| A30 | 64 | 2862.76 |
| A10 | 64 | 2429.62 |
| A2 | 16 | 631.31 |

How to use this model

These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.

The model is intended to be re-trained with the user's own dataset using the TAO Toolkit, or used as-is. Re-training can provide high-fidelity models that are adapted to the use case. The Jupyter notebook available as part of the TAO container can be used to re-train.

The primary use case intended for the model is segmenting people in a color (RGB) image. The model can be used to segment people from photos and videos by using appropriate video or image decoding and pre-processing. Note that this model performs semantic segmentation, not instance segmentation.

The model is encrypted and will only operate with the following key:

  • Model load key: tlt_encode

Please make sure to use this as the key for all TAO commands that require a model load key.

Input

Color (RGB) images of resolution 960 x 544 x 3

Output

A category label (person or background) for every pixel in the input image, i.e., a semantic segmentation mask of people for the input image.
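As an illustrative pre/post-processing sketch (the exact normalization and output tensor layout depend on the deployment pipeline, so treat these specifics as assumptions): resize the frame to the 960x544 network input, run inference, then take the per-pixel argmax over the two class channels to obtain the person/background mask.

```py
import numpy as np
import cv2  # assumed available for image decoding and resizing

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """Resize to the model's 960x544 input and lay out as (1, 3, 544, 960)."""
    resized = cv2.resize(frame_bgr, (960, 544))
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0  # assumed scaling
    return np.transpose(rgb, (2, 0, 1))[np.newaxis, ...]

def postprocess(logits: np.ndarray) -> np.ndarray:
    """Turn (1, 2, 544, 960) class logits into a (544, 960) mask of label ids
    (0 = background, 1 = person)."""
    return np.argmax(logits[0], axis=0).astype(np.uint8)
```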

[Example input image and corresponding output segmentation mask]

Instructions to use unpruned model with TAO

In order to use these models as pretrained weights for transfer learning, please use the snippet below as a template for the model_config component of the experiment spec file to train a UNet model. For more information on the experiment spec file, please refer to the TAO Toolkit User Guide.

Model Config

```py
model_config {
  num_layers: 18
  model_input_width: 960
  model_input_height: 544
  model_input_channels: 3
  all_projections: true
  arch: "shufflenet"
  use_batch_norm: true
  training_precision {
    backend_floatx: FLOAT32
  }
}
```

Use the following dataset config class parameters apart from the train_data_sources, val_data_sources, and test_data_sources. Please note that these are the default parameters used to generate the segmentation for the inferred image above. Please refer to the TAO Toolkit User Guide and experiment with the resize_method and resize_padding arguments to achieve the highest-quality mask on your dataset.

```py
dataset: "custom"
augment: False
input_image_type: "color"
resize_padding: True
resize_method: "NEAREST_NEIGHBOR"
```
Use the following for mapping the classes to the predicted label id. The person class is represented by id 1 and the background by id 0. An example `data_class_config` to be used for train/evaluate/inference in the experiment spec is as follows:

```py
data_class_config {
  target_classes {
    name: "person"
    mapping_class: "person"
    label_id: 1
  }
  target_classes {
    name: "background"
    mapping_class: "background"
    label_id: 0
  }
}
```
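To visualize a predicted mask with this mapping, a small hedged helper (the color choices are arbitrary and not part of this model card) can map label ids to colors:

```py
import numpy as np

# label_id -> RGB color; ids match the data_class_config above
PALETTE = {0: (0, 0, 0), 1: (0, 255, 0)}   # background black, person green

def colorize(mask: np.ndarray) -> np.ndarray:
    """Map an (H, W) mask of label ids to an (H, W, 3) RGB image."""
    out = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for label_id, color in PALETTE.items():
        out[mask == label_id] = color
    return out
```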

Limitations

Very Small Objects

The NVIDIA PeopleSemSegNet AMR model was trained to detect objects larger than 10x10 pixels. Therefore, it may not be able to detect objects that are smaller than 10x10 pixels.

Occluded Objects

When objects are occluded or truncated such that less than 20% of the object is visible, they may not be detected by the PeopleSemSegNet AMR model. For person-class objects, the model will detect occluded people as long as the head and shoulders are visible. However, if the person’s head and/or shoulders are not visible, the object might not be detected unless more than 60% of the person is visible.

Dark-lighting, Monochrome or Infrared Camera Images

The PeopleSemSegNet AMR model was trained on RGB images captured in good lighting conditions. Therefore, images captured in dark lighting conditions, monochrome images, or IR camera images may not provide good detection results.

Warped and Blurry Images

The PeopleSemSegNet AMR models were not trained on fish-eye lens cameras or moving cameras. Therefore, the models may not perform well on warped images or images with motion-induced or other blur.

Face and Bag classes

Although the face and bag classes appear in the labeling guidelines, they are not currently included in the segmentation model; only the person class is segmented.

Model versions:

PeopleSemSegNet AMR:
  • trainable_peoplesemsegnet_amr_v1.0 - Shuffleseg Unet Dynamic based pre-trained model.

This version of the model was particularly trained for segmenting people from a camera placed lower to the ground.

References

Citations

  • Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015.
  • Gamal, Mostafa, Mennatullah Siam, and Moemen Abdel-Razek. "Shuffleseg: Real-time semantic segmentation network." arXiv preprint arXiv:1803.03816 (2018).


License

License to use this model is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.

Ethical Considerations

Training and evaluation dataset mostly consists of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.