The BodyPoseNet models described in this card are used for multi-person human pose estimation: they predict the skeleton of every person in a given input image, where a skeleton consists of keypoints and the connections between them. The network follows a single-shot, bottom-up methodology, so no separate person detector is needed and the compute does not scale linearly with the number of people in the scene. The pose/skeleton output is commonly used as input for applications such as activity/gesture recognition, fall detection, and posture analysis.
The default model predicts 18 keypoints: nose, neck, right_shoulder, right_elbow, right_wrist, left_shoulder, left_elbow, left_wrist, right_hip, right_knee, right_ankle, left_hip, left_knee, left_ankle, right_eye, left_eye, right_ear, left_ear.
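For reference, the keypoint list above can be captured as an index-to-name mapping. This is an illustrative sketch only: the assumption that output channels follow this exact order should be verified against the deployed model's configuration.

```python
# Keypoint names in the order listed above. The assumption that the
# network's output channels follow this exact order is illustrative
# and should be checked against the model's spec file.
KEYPOINT_NAMES = [
    "nose", "neck", "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist", "right_hip",
    "right_knee", "right_ankle", "left_hip", "left_knee", "left_ankle",
    "right_eye", "left_eye", "right_ear", "left_ear",
]

# Reverse lookup: keypoint name -> channel index.
KEYPOINT_INDEX = {name: i for i, name in enumerate(KEYPOINT_NAMES)}
```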
This is a fully convolutional model whose architecture consists of a backbone network (such as VGG), an initial estimation stage that performs pixel-wise prediction of confidence maps (heatmaps) and part affinity fields, followed by multistage refinement (0 to N stages) of the initial predictions.
The training algorithm optimizes the network to minimize the loss on the confidence maps (heatmaps) and part affinity fields for a given image and its ground-truth pose labels.
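A simplified sketch of this objective is a mean-squared error over both prediction branches. This is an assumption-laden illustration, not the TAO implementation, which may additionally weight refinement stages and mask unlabeled regions:

```python
import numpy as np

def pose_loss(pred_cmap, gt_cmap, pred_paf, gt_paf):
    """Simplified training objective: MSE on confidence maps plus MSE
    on part affinity fields. Illustrative only -- the actual TAO loss
    may sum over refinement stages and apply per-pixel masks."""
    l_cmap = np.mean((pred_cmap - gt_cmap) ** 2)
    l_paf = np.mean((pred_paf - gt_paf) ** 2)
    return l_cmap + l_paf
```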
The available pretrained model is trained on a subset of the Google OpenImages dataset.
The inference performance of the BodyPoseNet v1.0 model was measured against the COCO validation dataset.
The KPIs for the evaluation data are reported in the table below.
The inference performance is measured at INT8 precision with an input dimension of 288x384, using trtexec on Jetson Nano, AGX Xavier, Xavier NX, and NVIDIA T4 GPUs. The Jetson devices run in the Max-N configuration for maximum system performance. End-to-end performance with streaming video data may vary slightly depending on the application.
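A representative trtexec invocation for this kind of benchmark might look like the following. This is a hypothetical sketch: the engine file name is a placeholder, and the exact flags depend on the TensorRT version and how the engine was built.

```shell
# Hypothetical benchmark of an INT8 TensorRT engine built for a
# 288x384 input; the engine path is a placeholder, not a real artifact.
trtexec --loadEngine=bodyposenet_int8.engine \
        --iterations=100 \
        --avgRuns=100
```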
The models on this page can only be used with the Train Adapt Optimize (TAO) Toolkit. TAO provides a simple command-line interface to train a deep learning model for body pose estimation.
The primary use case for this model is detecting human poses in a given RGB image. BodyPoseNet is commonly used for activity/gesture recognition, fall detection, posture analysis, and similar applications.
Install the NGC CLI from ngc.nvidia.com.
Configure the NGC CLI using the following command:
ngc config set
ngc registry model list nvidia/tao/bodyposenet:*
ngc registry model download-version nvidia/tao/bodyposenet:<template> --dest <path>
The network accepts an H x W x 3 input. The images are pre-processed to handle normalization and resizing while maintaining the aspect ratio.
The network outputs two tensors: confidence maps (H1' x W1' x C) and part affinity fields (H2' x W2' x P). After NMS and bipartite graph matching, the final results have shape M x N x 3, where M is the number of detected people and N is the number of keypoints, each with an (x, y, confidence) triple.
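The first step of that post-processing, extracting candidate keypoints from the confidence maps, can be sketched as a local-maximum search. This is a minimal illustration under simplifying assumptions; the full TAO pipeline also refines peaks and uses the part affinity fields for bipartite matching across people:

```python
import numpy as np

def extract_peaks(heatmaps, threshold=0.1):
    """Find local maxima in each per-keypoint confidence map.

    heatmaps: array of shape (H, W, C). Returns a list of C arrays,
    each row a (row, col, score) candidate. Illustrative sketch only;
    the real post-processing also applies NMS refinements and PAF-based
    bipartite graph matching.
    """
    H, W, C = heatmaps.shape
    peaks = []
    for c in range(C):
        hm = heatmaps[:, :, c]
        # Zero-pad so border pixels can be compared to 4 neighbours.
        p = np.pad(hm, 1, mode="constant")
        local_max = (
            (hm >= p[:-2, 1:-1]) & (hm >= p[2:, 1:-1]) &   # up / down
            (hm >= p[1:-1, :-2]) & (hm >= p[1:-1, 2:]) &   # left / right
            (hm > threshold)
        )
        rows, cols = np.nonzero(local_max)
        peaks.append(np.stack([rows, cols, hm[rows, cols]], axis=1))
    return peaks
```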
BodyPoseNet model does not give good results for very crowded scenes, especially if detecting the pose for small-scale people in the image.
The network may have difficulty estimating poses of people who are occluded by other objects or persons.
The network may have difficulty estimating poses of people when there exists no distinction with the background (for example, estimation failure may occur for a person wearing a black sweater against a dark background).
The deployable models are encrypted and will only operate with the following key:
Please make sure to use this as the key for all TAO commands that require a model load key.
This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, please visit this link, or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.