The PeopleSemSegNet Autonomous Mobile Robot (AMR) model detects one or more "person" objects within an image and returns a semantic segmentation mask for all people within the image. This model is optimized for the Isaac Perceptor product and is based on the deployable PeopleSemSegNet version from TAO, which is ready for commercial use.
Architecture Type: Convolutional Neural Network (CNN)
Network Architecture: UNet
UNet is a widely adopted network for semantic segmentation, with applications in autonomous vehicles, industrial automation, smart cities, and more. UNet is a fully convolutional network with an encoder composed of convolutional layers and a decoder composed of transposed convolutions or upsampling layers, and it predicts a class label for every pixel in the input image.
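As an illustrative sketch only (not the actual TAO UNet implementation), the encoder-decoder pattern can be expressed in a few lines of PyTorch. Layer widths here are arbitrary, and PyTorch's NCHW layout is used, whereas the deployed model consumes NHWC input as described below.

```python
# A minimal UNet-style encoder-decoder sketch for illustration; layer sizes are
# assumptions and do not reflect the trained PeopleSemSegNet AMR network.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=3):
        super().__init__()
        # Encoder: convolutional blocks that progressively downsample the input.
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        # Decoder: transposed convolution upsamples back to the input resolution.
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        # 1x1 convolution predicts a class score for every pixel.
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                       # (N, 32, H, W)
        e2 = self.enc2(self.pool(e1))           # (N, 64, H/2, W/2)
        d1 = self.up(e2)                        # (N, 32, H, W)
        d1 = self.dec1(torch.cat([d1, e1], 1))  # skip connection, then fuse
        return self.head(d1)                    # per-pixel class scores

logits = TinyUNet()(torch.randn(1, 3, 544, 960))  # (1, num_classes, 544, 960)
```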
Input Type(s): Images
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: 4D
Other Properties Related to Input:
- RGB fixed resolution: 960 x 544 x 3 (W x H x C)
- Channel ordering of the input: NHWC, where N = batch size, C = number of channels (3), H = height of the images (544), W = width of the images (960)
- Input scale: 1/255.0
- Mean subtraction: None
- No minimum bit depth, alpha, or gamma
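A minimal pre-processing sketch that produces a batch in the layout described above; the file name and the use of Pillow/NumPy are illustrative assumptions, and any equivalent image library works.

```python
# Sketch of preparing one image as a 1 x 544 x 960 x 3 (NHWC) network input.
import numpy as np
from PIL import Image

def preprocess(path: str) -> np.ndarray:
    # Load as RGB and resize to the fixed network resolution of 960 x 544 (W x H).
    img = Image.open(path).convert("RGB").resize((960, 544))
    # Scale pixel values by 1/255.0; no mean subtraction is applied.
    arr = np.asarray(img, dtype=np.float32) / 255.0   # (544, 960, 3) = HWC
    # Add the batch dimension to obtain NHWC.
    return np.expand_dims(arr, axis=0)                # (1, 544, 960, 3)

batch = preprocess("frame.png")  # "frame.png" is a hypothetical input image
```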
Output Type(s): Label(s), Segmentation Mask, Confidence Scores
Output Format(s): Label: Text String(s); Segmentation Mask: Floating Point; Confidence Scores: Floating Point
Other Properties Related to Output: Category Label(s): bag, face, person; Segmentation Mask; Confidence Scores
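As a sketch of how such an output might be decoded, the example below assumes per-pixel class probabilities in NHWC layout; the exact output tensor layout and class ordering are assumptions and should be confirmed against the deployed model.

```python
# Sketch of converting per-pixel class probabilities into a label mask and
# per-pixel confidence scores. Class order below is an assumption.
import numpy as np

CLASS_NAMES = ["person", "bag", "face"]  # illustrative ordering only

def decode(probs: np.ndarray):
    # probs: (1, 544, 960, num_classes) softmax output (assumed layout).
    label_mask = np.argmax(probs[0], axis=-1)   # (544, 960) integer class indices
    confidence = np.max(probs[0], axis=-1)      # (544, 960) score of the chosen class
    return label_mask, confidence
```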
Data Collection Method by dataset: Automatic/Sensors
Labeling Method by dataset: Human
This model was trained using the UNet entrypoint in TAO. The training algorithm optimizes the network to minimize a per-pixel segmentation loss over the classes. The training is carried out in two phases; in the first phase, the network is trained without regularization.
Internal, proprietary dataset with more than 5 million objects for the person class. The training dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV). Approximately half of the training data consists of images captured in an indoor office environment; for this case, the camera is typically mounted at a height of approximately 10 feet at a 45-degree angle with a close field of view. This content was chosen to improve accuracy of the models for the extended-arms poses of people. We also added approximately 45 thousand images with low-density scenes from a robot's point of view to improve performance for use cases where person detection is needed at low camera heights.
| Environment | Images | Persons |
|---|---|---|
| 5ft Indoor | 108,692 | 1,060,960 |
| 5ft Outdoor | 206,912 | 1,668,250 |
| 10ft Indoor (Office close FOV) | 413,270 | 4,577,870 |
| 10ft Outdoor | 18,321 | 178,817 |
| 20ft Indoor | 104,972 | 1,079,550 |
| 20ft Outdoor | 24,783 | 59,623 |
| Robotics Subset | 43,076 | 160,806 |
| Total | 920,026 | 8,785,876 |
The training dataset was created by human labelers drawing ground-truth bounding boxes and assigning categories. The following guidelines were used while labeling the training data for the NVIDIA PeopleSemSegNet AMR model. If you are looking to re-train with your own dataset, please follow the guidelines below for the highest accuracy.
PeopleSemSegNet AMR project labeling guidelines:
1. All objects that fall under one of the three classes (person, face, bag) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @ 1920x1080) are labeled with the appropriate class label.
2. If a person is carrying an object, mark the bounding box to include the carried object as long as it does not affect the silhouette of the person. For example, exclude a rolling bag if the person is pulling it behind them and it is distinctly visible as a separate object, but include a backpack, purse, etc. that does not significantly alter the silhouette of the pedestrian.
3. Occlusion: Partially occluded objects that do not belong to the person class and are approximately 60% or more visible are labeled with a bounding box around the visible part of the object and marked as partially occluded. Objects under 60% visibility are not annotated.
4. Occlusion for the person class: If an occluded person's head and shoulders are visible and the visible height is approximately 20% or more, the object is labeled with a bounding box around the visible part of the person. If the head and shoulders are not visible, please follow the occlusion guidelines in item 3 above.
5. Truncation: An object other than a person that is at the edge of the frame with 60% or more of the object visible is labeled with a bounding box around the visible part and marked with the truncation flag.
6. Truncation for the person class: If a truncated person's head and shoulders are visible and the visible height is approximately 20% or more, mark the bounding box around the visible part of the person. If the head and shoulders are not visible, please follow the truncation guidelines in item 5 above.
7. Each frame is not required to have an object.
The segmentation masks were labeled using an NVIDIA internal auto-labeling tool.
Evaluation Data Properties
Data Collection Method by dataset: Automatic/Sensors
Labeling Method by dataset: Human
5000 proprietary images across a variety of environments from a robot's point of view.
The KPI for the evaluation data are reported in the table below. The model is evaluated based on Mean Intersection-Over-Union (MIOU), a common evaluation metric for semantic image segmentation that first computes the IOU for each semantic class and then averages over classes.
| Model | PeopleSemSegNet AMR |
|---|---|
| Content | MIOU |
| Robot's point of view | 87 |
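For reference, MIOU can be computed from integer label masks as in the sketch below (NumPy only); this is an illustration of the metric, not the evaluation code used to produce the table above.

```python
# Sketch of mean IoU from a per-class confusion matrix over integer label masks.
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    # Confusion matrix: rows = ground-truth class, columns = predicted class.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt.ravel(), pred.ravel()), 1)
    tp = np.diag(cm).astype(np.float64)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)  # per-class IoU, guarding empty classes
    return float(iou.mean())                # average over classes = MIOU
```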
These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.
The model is intended to be used as-is or re-trained with the user's own dataset using the TAO Toolkit, which can provide high-fidelity models adapted to the use case. The Jupyter notebook available as part of the TAO container can be used to re-train.
The primary use case intended for the model is segmenting people in a color (RGB) image. The model can be used to segment people from photos and videos by using appropriate video or image decoding and pre-processing. Note that this model performs semantic segmentation and not instance-based segmentation.
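A sketch of such a decoding and pre-processing loop for video is shown below, assuming OpenCV for frame decoding and a hypothetical run_inference() callable standing in for the deployed engine (for example, a TensorRT or DeepStream pipeline in practice).

```python
# Illustrative video segmentation loop; run_inference() is a hypothetical wrapper
# around the deployed model and is not part of this model's actual API.
import cv2
import numpy as np

def segment_video(path: str, run_inference):
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()                         # frames are decoded as BGR
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # convert to RGB
        rgb = cv2.resize(rgb, (960, 544)).astype(np.float32) / 255.0
        probs = run_inference(rgb[None, ...])          # NHWC batch of one frame
        mask = np.argmax(probs[0], axis=-1)            # semantic (not instance) mask
        yield frame, mask
    cap.release()
```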
The NVIDIA PeopleSemSegNet AMR model detects faces. However, no additional information such as race, gender, or skin type about the faces is inferred.
The training and evaluation datasets consist mostly of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.