PeopleSemSegnet

Description
Semantic segmentation of persons in an image.
Publisher
NVIDIA
Latest Version
deployable_shuffleseg_unet_onnx_v1.0.1
Modified
November 27, 2024
Size
3.73 MB

PeopleSemSegNet Model Card

Description:

PeopleSemSegNet detects persons in an image. This model is ready for commercial use.

References:

Citations

  • Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015.
  • Gamal, Mostafa, Mennatullah Siam, and Moemen Abdel-Razek. "Shuffleseg: Real-time semantic segmentation network." arXiv preprint arXiv:1803.03816 (2018).

Using TAO Pre-trained Models

  • Get TAO Container
  • Get other purpose-built models from the NGC model registry:
    • TrafficCamNet
    • PeopleNet
    • PeopleNet-Transformer
    • DashCamNet
    • FaceDetectIR
    • VehicleMakeNet
    • VehicleTypeNet
    • PeopleSegNet
    • PeopleSemSegNet
    • License Plate Detection
    • License Plate Recognition
    • PoseClassificationNet
    • Facial Landmark
    • FaceDetect
    • 2D Body Pose Estimation
    • ActionRecognitionNet
    • People ReIdentification
    • PointPillarNet
    • CitySegFormer
    • Retail Object Detection
    • Retail Object Embedding
    • Optical Inspection
    • Optical Character Detection
    • Optical Character Recognition
    • PCB Classification
    • PeopleSemSegFormer

Model Architecture:

Architecture Type: Convolutional Neural Network (CNN)
Network Architecture: U-Net

Input:

Input Type(s): Images
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: 3D
Other Properties Related to Input: Fixed RGB resolution: 960 x 544 x 3 (W x H x C); no minimum bit depth, alpha, or gamma.
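
For reference, here is a minimal pre-processing sketch in Python (using OpenCV) that produces the fixed 960x544x3 RGB input described above. The 1/255 scaling and NCHW layout are assumptions and should be matched to the pre-processing configured in your TAO or DeepStream pipeline.

```python
import cv2
import numpy as np

def preprocess(image_path: str) -> np.ndarray:
    """Resize an image to the model's fixed 960x544 RGB input.

    The 1/255 scaling and NCHW layout below are assumptions; match them to
    the pre-processing configured in your TAO/DeepStream pipeline.
    """
    bgr = cv2.imread(image_path)                  # OpenCV decodes to BGR
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)    # convert to RGB
    resized = cv2.resize(rgb, (960, 544))         # (width, height) = 960 x 544
    chw = resized.transpose(2, 0, 1).astype(np.float32) / 255.0  # HWC -> CHW
    return np.expand_dims(chw, axis=0)            # 1 x 3 x 544 x 960
```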

Output:

Output Type(s): Label(s), Semantic Segmentation Mask
Output Format: Label: Text String(s); Segmentation Mask: 2D
Other Properties Related to Output: Category Label(s): (person or background), Segmentation Mask
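
As a rough illustration of consuming this output, the sketch below converts per-pixel class scores into a binary person mask. The 1 x num_classes x H x W layout is an assumption; some exported variants may emit the per-pixel class index directly, so check the actual output binding of your model.

```python
import numpy as np

def to_person_mask(scores: np.ndarray, person_id: int = 1) -> np.ndarray:
    """Convert raw per-pixel class scores into a binary person mask.

    Assumes scores are shaped 1 x num_classes x H x W; verify against your
    export, since some deployable variants output the class index directly.
    """
    class_map = np.argmax(scores, axis=1)[0]          # per-pixel class id, H x W
    return (class_map == person_id).astype(np.uint8)  # 1 = person, 0 = background
```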

Software Integration:

Runtime Engine(s):

  • TAO - 5.2
  • DeepStream - 6.1 or later

Supported Hardware Architecture(s):

  • Ampere
  • Jetson
  • Hopper
  • Lovelace
  • Pascal
  • Turing
  • Volta

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra

Model Version(s):

Vanilla UNet Dynamic:

  • trainable_vanilla_unet_v1.0 - Pre-trained model based on the Vanilla UNet Dynamic architecture.
  • deployable_vanilla_unet_v1.0 - Vanilla UNet Dynamic model deployable to DeepStream.

There are two models for the deployable version:

  • peoplesemsegnet_vanilla_unet_dynamic_etlt_fp32.etlt: FP32 inference
  • peoplesemsegnet_vanilla_unet_dynamic_etlt_int8_fp16.etlt: FP16/INT8 inference

The calibration cache for INT8 PTQ of the Vanilla UNet Dynamic model has been released.

This version of the model was specifically trained on content featuring people with extended arms.

ShuffleSeg UNet:

  • trainable_shuffleseg_unet_v1.0 - Pre-trained model based on the ShuffleSeg UNet architecture.
  • deployable_shuffleseg_unet_v1.0 - ShuffleSeg UNet model deployable to DeepStream. The calibration cache for INT8 PTQ of the ShuffleSeg UNet model has been released.

Training & Evaluation:

Training Dataset:

Data Collection Method by dataset:

  • Automatic/Sensors

Labeling Method by dataset:

  • Human

Properties:
Proprietary dataset with more than 5 million people. The training dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV) with multiple camera types. Approximately half of the training data consists of images captured in an indoor office environment.

| Environment | Images | Persons |
|---|---|---|
| 5ft Indoor | 108,692 | 1,060,960 |
| 5ft Outdoor | 206,912 | 1,668,250 |
| 10ft Indoor (Office close FOV) | 413,270 | 4,577,870 |
| 10ft Outdoor | 18,321 | 178,817 |
| 20ft Indoor | 104,972 | 1,079,550 |
| 20ft Outdoor | 24,783 | 59,623 |
| Total | 876,950 | 8,625,070 |

Training Data Ground-truth Labeling Guidelines

The training dataset is created by labeling ground-truth bounding-boxes and categories by human labellers. The following guidelines were used while labelling the training data for the NVIDIA PeopleSemSegNet model. If you are looking to re-train with your own dataset, please follow the guidelines below for the highest accuracy.

PeopleSemSegNet project labelling guidelines:

  1. All objects that fall under one of the three classes (person, face, bag) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @1920x1080; see the size-check sketch after this list) are labeled with the appropriate class label.

  2. If a person is carrying an object, please mark the bounding-box to include the carried object as long as it doesn't affect the silhouette of the person. For example, exclude a rolling bag if they are pulling it behind them and it is distinctly visible as a separate object, but include a backpack, purse, etc. that does not alter the silhouette of the pedestrian significantly.

  3. Occlusion: For partially occluded objects that do not belong to the person class and are approximately 60% or more visible, the bounding box is drawn around the visible part of the object. These objects are marked as partially occluded. Objects under 60% visibility are not annotated.

  4. Occlusion for person class: If an occluded person's head and shoulders are visible and the visible height is approximately 20% or more, then the bounding box is drawn around the visible part of the person. If the head and shoulders are not visible, please follow the Occlusion guidelines in item 3 above.

  5. Truncation: An object other than a person that is at the edge of the frame and is 60% or more visible is marked with the truncation flag for the object.

  6. Truncation for person class: If a truncated person's head and shoulders are visible and the visible height is approximately 20% or more, mark the bounding box around the visible part of the person. If the head and shoulders are not visible, please follow the Truncation guidelines in item 5 above.

  7. Each frame is not required to have an object.

  8. The segmentation masks were labeled using an NVIDIA internal auto-labeling tool.
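
If you label your own data against guideline 1, a hypothetical helper like the one below can be used to check the minimum-size rule. The (x1, y1, x2, y2) pixel box format and the rescaling to the 1920x1080 reference resolution are assumptions for this sketch.

```python
def meets_min_size(box, frame_w: int = 1920, frame_h: int = 1080, min_px: int = 10) -> bool:
    """Check guideline 1: height >= 10 px OR width >= 10 px at 1920x1080.

    The (x1, y1, x2, y2) box format and the rescaling to the 1920x1080
    reference resolution are assumptions for this sketch.
    """
    x1, y1, x2, y2 = box
    w = (x2 - x1) * 1920.0 / frame_w   # width at the reference resolution
    h = (y2 - y1) * 1080.0 / frame_h   # height at the reference resolution
    return w >= min_px or h >= min_px
```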

Evaluation Dataset:

Data Collection Method by dataset:

  • Automatic/Sensors

Labeling Method by dataset:

  • Human

Properties:
50,000 proprietary images across a variety of environments with multiple camera types. The frames are high-resolution images (1920x1080 pixels) resized to 960x544 pixels.

Methodology and KPI

The KPIs for the evaluation data are reported in the table below. The model is evaluated based on Mean Intersection-Over-Union (MIOU), a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then averages over classes.

| Content | Vanilla UNet Dynamic (MIOU) | ShuffleSeg (MIOU) |
|---|---|---|
| 5ft | 91.86 | 89 |
| 10ft | 91 | 87 |
| 20ft | 89.7 | 84 |
| Office use-case | 95.01 | 87 |
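
For clarity, here is a minimal sketch of the MIOU computation described above, using the person/background label IDs defined later in this card. It is an illustration of the metric, not the exact evaluation code.

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int = 2) -> float:
    """Mean IoU: per-class IoU from prediction/ground-truth class maps,
    averaged over the classes that appear in either map."""
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(intersection / union)
    return float(np.mean(ious))
```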

Inference:

Engine: TensorRT
Test Hardware:

  • Jetson AGX Xavier
  • Xavier NX
  • Orin
  • Orin NX
  • NVIDIA T4
  • Ampere GPU
  • A2
  • A30
  • L4
  • DGX H100
  • DGX A100
  • L40
  • JAO 64GB
  • Orin NX16GB
  • Orin Nano 8GB

Inference is run on the provided unpruned models at INT8 precision; on the Jetson Nano, FP16 precision is used. Inference performance is measured using trtexec on Jetson Nano, AGX Xavier, Xavier NX, and NVIDIA T4 GPUs, with the Jetson devices running in the Max-N configuration for maximum GPU frequency. The performance shown here is inference-only performance; end-to-end performance with streaming video data might vary slightly depending on other bottlenecks in the hardware and software.

| Model | Xavier NX (BS / GPU FPS) | AGX Xavier (BS / GPU FPS) | Orin NX (BS / GPU FPS) | Orin (BS / GPU FPS) |
|---|---|---|---|---|
| ShuffleSeg | 16 / 199 | 16 / 356 | 16 / 289 | 32 / 703 |
| VanillaUnet Dynamic | 4 / 15 | 4 / 25 | 4 / 27 | 4 / 75 |

| Model | T4 (BS / GPU FPS) | A100 (BS / GPU FPS) | A30 (BS / GPU FPS) | A10 (BS / GPU FPS) | A2 (BS / GPU FPS) |
|---|---|---|---|---|---|
| ShuffleSeg | 64 / 1027.85 | 64 / 5745.79 | 64 / 2862.76 | 64 / 2429.62 | 16 / 631.31 |
| VanillaUnet Dynamic | 16 / 79.08 | 16 / 496.34 | 16 / 253.77 | 16 / 180.04 | 16 / 44.09 |
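
When benchmarking locally, the following sketch can be used to load a serialized engine and print its I/O bindings to confirm the expected 1x3x544x960 input before timing runs. It assumes the TensorRT 8.x Python bindings (where the binding-based API is available) and reuses the bs1_fp16.engine file name from the tao-converter examples later in this card.

```python
import tensorrt as trt

# Assumed file name, matching the tao-converter examples later in this card.
ENGINE_PATH = "bs1_fp16.engine"

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
    # Print each binding's direction, name, and shape (TensorRT 8.x API).
    for i in range(engine.num_bindings):
        direction = "input" if engine.binding_is_input(i) else "output"
        print(direction, engine.get_binding_name(i), engine.get_binding_shape(i))
```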

How to use this model

These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.

The model is intended to be trained with the TAO Toolkit on the user's own dataset, or used as-is. Re-training can provide high-fidelity models that are adapted to the use case. The Jupyter notebook available as part of the TAO container can be used to re-train.

The primary use case intended for the model is segmenting people in a color (RGB) image. The model can be used to segment people from photos and videos by using appropriate video or image decoding and pre-processing. Note that this model performs semantic segmentation, not instance-based segmentation.
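
For a quick visual check of the semantic (not instance) mask, a hypothetical overlay helper such as the one below can be used. The colors and blending weights are arbitrary choices, and person_mask is assumed to come from a post-processing step like the sketch in the Output section above.

```python
import cv2
import numpy as np

def overlay_mask(frame_bgr: np.ndarray, person_mask: np.ndarray) -> np.ndarray:
    """Blend a binary person mask over a BGR frame for visualization."""
    mask = cv2.resize(person_mask, (frame_bgr.shape[1], frame_bgr.shape[0]),
                      interpolation=cv2.INTER_NEAREST)
    color = np.zeros_like(frame_bgr)
    color[mask == 1] = (0, 255, 0)                     # green for the person class
    return cv2.addWeighted(frame_bgr, 0.7, color, 0.3, 0)
```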

The model is encrypted and will only operate with the following key:

  • Model load key: tlt_encode

Please make sure to use this as the key for all TAO commands that require a model load key.

Instructions to use unpruned model with TAO

In order to use these models as pretrained weights for transfer learning, please use the snippet below as a template for the model_config component of the experiment spec file used to train a UNet model. For more information on the experiment spec file, please refer to the TAO Toolkit User Guide.

Model Config for Vanilla UNet Dynamic

```py
model_config {
  num_layers: 18
  model_input_width: 960
  model_input_height: 544
  model_input_channels: 3
  all_projections: true
  arch: "vanilla_unet_dynamic"
  use_batch_norm: true
  training_precision {
    backend_floatx: FLOAT32
  }
}
```

Model Config for ShuffleSeg

```py
model_config {
  num_layers: 18
  model_input_width: 960
  model_input_height: 544
  model_input_channels: 3
  all_projections: true
  arch: "shufflenet"
  use_batch_norm: true
  training_precision {
    backend_floatx: FLOAT32
  }
}
```

Use the following dataset_config parameters in addition to train_data_sources, val_data_sources, and test_data_sources. Please note that these are the default parameters used to generate the sample segmentation results. Please refer to the TAO Toolkit User Guide and experiment with the resize_method and resize_padding arguments to achieve the highest-quality masks on your dataset.

```py
dataset: "custom"
augment: False
input_image_type: "color"
resize_padding: True
resize_method: "NEAREST_NEIGHBOR"
```

Use the following to map the classes to the predicted label IDs. The person class is represented by id 1 and the background by id 0. An example `data_class_config` to be used for train/evaluate/inference in the experiment spec is as follows:

```py
data_class_config {
  target_classes {
    name: "person"
    mapping_class: "person"
    label_id: 1
  }
  target_classes {
    name: "background"
    mapping_class: "background"
    label_id: 0
  }
}
```

Instructions to deploy these models with DeepStream

To create the entire end-to-end video analytics application, deploy these models with the DeepStream SDK. The DeepStream SDK is a streaming analytics toolkit that accelerates building AI-based video analytics applications. DeepStream supports direct integration of these models into the DeepStream sample apps.

To deploy these models with DeepStream 6.1, please follow the instructions below:

Download and install the DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream development guide. The config files for the purpose-built models are located in the DeepStream installation directory; /opt/nvidia/deepstream is the default installation directory, and this path will be different if you installed DeepStream in a different directory.

You will need one config file per model and one label file. These files are provided in the NVIDIA-AI-IOT repository.

  • pgie_unet_tlt_config_peoplesemsegnet_shuffleseg.txt - File to configure inference settings for ShuffleSeg
  • pgie_unet_tlt_config_peoplesemsegnet_vanilla_unet_dynamic.txt - File to configure inference settings for Vanilla UNet Dynamic
  • labels.txt - Label file with 2 classes

Convert the .etlt file to an engine if you want to provide the model as a TRT engine. Otherwise, you can provide the .etlt model directly to DeepStream. To manually convert the model to a TRT engine, follow the example commands below:

ShuffleSeg

FP16

./tao-converter -k tlt_encode -p input_2:0,1x3x544x960,1x3x544x960,1x3x544x960 -t fp16 -e ./bs1_fp16.engine ./peoplesemsegnet_shuffleseg_etlt.etlt

INT8

./tao-converter -k tlt_encode -p input_2:0,1x3x544x960,1x3x544x960,1x3x544x960 -t int8 -e ./bs1_int8.engine -c ./peoplesemsegnet_shuffleseg_cache.txt ./peoplesemsegnet_shuffleseg_etlt.etlt

VanillaUnetDynamic

FP32

./tao-converter -k tlt_encode -p input_1:0,1x3x544x960,1x3x544x960,1x3x544x960 -t fp32 -e ./bs1_fp32.engine ./peoplesemsegnet_vanilla_unet_dynamic_etlt_fp32.etlt

FP16

./tao-converter -k tlt_encode -p input_1:0,1x3x544x960,1x3x544x960,1x3x544x960 -t fp16 -e ./bs1_fp16.engine ./peoplesemsegnet_vanilla_unet_dynamic_etlt_int8_fp16.etlt

INT8

./tao-converter -k tlt_encode -p input_1:0,1x3x544x960,1x3x544x960,1x3x544x960 -t int8 -e ./bs1_int8.engine -c ./peoplesemsegnet_vanilla_unet_dynamic_etlt_int8.cache peoplesemsegnet_vanilla_unet_dynamic_etlt_int8_fp16.etlt

Key Parameters in pgie_unet_tao_config_peoplesemsegnet_vanilla_unet_dynamic.txt and pgie_unet_tao_config_peoplesemsegnet_shuffleseg.txt

# You can either provide the etlt model and key or trt engine obtained by using tao-converter
tlt-model-key=tlt_encode
# tlt-encoded-model=../../path/to/.etlt file
model-engine-file=../../path/to/trt_engine
network-type=100
network-mode=2
labelfile-path=/path/to/labels.txt

# Uncomment below if you want to use etlt file instead of engine
# int8-calib-file=/path/to/calibration cache text file
infer-dims=3;544;960
batch-size=1
num-detected-classes=2
segmentation-output-order=1
segmentation-threshold=0.0
output-tensor-meta=1
model-color-format=1 # BGR pre-processing

Run ds-tao-segmentation:

VanillaUnetDynamic

./apps/tao_segmentation/ds-tao-segmentation -c configs/unet_tao/pgie_unet_tao_config_peoplesemsegnet_vanilla_unet_dynamic.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264

ShuffleSeg UNet

./apps/tao_segmentation/ds-tao-segmentation -c configs/unet_tao/pgie_unet_tlt_config_peoplesemsegnet_shuffleseg.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264

Documentation to deploy with DeepStream is provided in the "Deploying to DeepStream" chapter of the TAO User Guide.

Technical blogs

  • Access the latest in Vision AI development workflows with NVIDIA TAO Toolkit 5.0
  • Improve accuracy and robustness of vision ai models with vision transformers and NVIDIA TAO
  • Train like a ‘pro’ without being an AI expert using TAO AutoML
  • Create Custom AI models using NVIDIA TAO Toolkit with Azure Machine Learning
  • Developing and Deploying AI-powered Robots with NVIDIA Isaac Sim and NVIDIA TAO
  • Learn endless ways to adapt and supercharge your AI workflows with TAO - Whitepaper
  • Customize Action Recognition with TAO and deploy with DeepStream
  • Read the 2 part blog on training and optimizing 2D body pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train real-time License plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important, learn how you can achieve state of the art accuracy for classification and object detection models using TAO

Suggested reading

  • More information about the TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
  • TAO documentation
  • Read the TAO Getting Started guide and release notes.
  • If you have any questions or feedback, please refer to the discussions on the TAO Toolkit Developer Forums
  • Deploy your models for video analytics applications using DeepStream. Learn more about the DeepStream SDK
  • Deploy your models in Riva for ConvAI use cases.

Ethical Considerations:

The training and evaluation datasets are sourced from North America. A more inclusive training and evaluation dataset would include content from other parts of the world.

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Promise and the Explainability, Bias, Safety & Security, and Privacy Subcards.