The model described in this card is an action recognition network, which recognizes what people do in videos. Two pretrained ActionRecognitionNet models are delivered: a 2D model and a 3D model. Both models are trained on a subset of HMDB51 with RGB frames as input.
Both the 2D and 3D models use a ResNet-style backbone. They take a sequence of RGB frames as input and predict the action label for those frames.
The training algorithm optimizes the network to minimize the cross-entropy classification loss.
The models are trained on a subset of HMDB51. We pick the walk, ride_bike, run, fall_floor and push videos out of HMDB51 to form HMDB5. The training videos vary in visible body parts, camera motion, camera viewpoint, number of people involved in the action, and video quality. The dataset statistics:
| classes | number of videos |
|---------|------------------|
- Visible body parts: upper body, full body, lower body
- Camera motion: motion, static
- Camera viewpoint: front, back, left, right
- Number of people involved in the action: single, two, three
- Video quality: good, medium, bad
- Video size: most videos are 320x240
The data must be organized in the following directory structure:
```
/Dataset_01
    /class_1
        /video_1
            /rgb
                0000.png
                0001.png
                0002.png
                ...
                N.png
```
TAO Toolkit supports training ActionRecognitionNet with RGB input. The dataset should be divided into separate directories by class. Each class directory contains multiple video clip folders, each of which holds the corresponding RGB frames (rgb). TAO Toolkit also supports training with optical flow and RGB + optical flow inputs. Pretrained models with optical flow and RGB + optical flow will be added in a future release.
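As an illustration of this layout, here is a minimal Python sketch (not part of TAO Toolkit; the root path and function name are hypothetical) that walks a dataset root organized as described and collects (rgb_dir, class_label) pairs:

```python
import os

def collect_clips(dataset_root):
    """Walk <root>/<class>/<video>/rgb/ and return (rgb_dir, class_label) pairs."""
    clips = []
    for class_name in sorted(os.listdir(dataset_root)):
        class_dir = os.path.join(dataset_root, class_name)
        if not os.path.isdir(class_dir):
            continue
        for video_name in sorted(os.listdir(class_dir)):
            rgb_dir = os.path.join(class_dir, video_name, "rgb")
            if os.path.isdir(rgb_dir):
                clips.append((rgb_dir, class_name))
    return clips

# Example (hypothetical path): each entry points at a folder of 0000.png ... N.png
# for clip_dir, label in collect_clips("/Dataset_01"):
#     print(label, clip_dir)
```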
The evaluation dataset is obtained by randomly sampling 10% of the videos per class out of HMDB5. The evaluation videos are also diverse in visible body parts, camera motion, camera viewpoint, number of people involved in the action, and video quality.
The key performance indicator is the accuracy of action recognition. Two evaluation schemes are used, illustrated in the sketch below:

- Center evaluation: inference is performed on the middle frames of the video clip. For example, if the model requires 32 frames as input and a video clip has 128 frames, we choose the frames from index 48 to index 79 for inference.
- Conv evaluation: inference is performed on 10 segments of a video clip. We uniformly divide the video clip into 10 parts, choose the center of each part as a start point, and pick 32 consecutive frames from each start point to form the inference segments. The final label of the video is determined by the average score over those 10 segments.
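The following minimal Python sketch shows one way to compute the frame indices for the two schemes; the function names and defaults are illustrative and not TAO Toolkit APIs.

```python
def center_indices(num_frames, seq_len=32):
    """Indices of the middle seq_len frames of a clip (center evaluation)."""
    start = (num_frames - seq_len) // 2
    return list(range(start, start + seq_len))

def conv_segments(num_frames, seq_len=32, num_segments=10):
    """Start a seq_len-frame window at the center of each of num_segments
    uniform parts of the clip (conv evaluation); the scores of the segments
    are averaged to get the final label."""
    part = num_frames / num_segments
    segments = []
    for i in range(num_segments):
        start = int(i * part + part / 2)
        start = max(0, min(start, num_frames - seq_len))  # keep window inside the clip
        segments.append(list(range(start, start + seq_len)))
    return segments

print(center_indices(128)[0], center_indices(128)[-1])  # 48 79, matching the example above
```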
| model | dataset | center accuracy | conv accuracy |
|-------|---------|-----------------|---------------|
The inference uses FP16 precision. Inference performance is measured with trtexec on Jetson Nano, Xavier NX, AGX Xavier and NVIDIA T4 GPUs. The Jetson devices run at the Max-N configuration for maximum system performance. The data shown reflects inference-only performance; end-to-end performance with streaming video data might vary slightly depending on the application use case.
This model needs to be used with NVIDIA hardware and software. For hardware, the model can run on any NVIDIA GPU, including NVIDIA Jetson devices. This model can only be used with Train Adapt Optimize (TAO) Toolkit, DeepStream SDK or TensorRT.
The primary use case for this model is to recognize an action from a sequence of RGB frames. The sequence length is 32 frames.
There are two models provided: resnet18_2d_rgb_hmdb5_32 and resnet18_3d_rgb_hmdb5_32.
They are intended for training and fine-tuning with the Train Adapt Optimize (TAO) Toolkit and the user's action recognition dataset. High-fidelity models can be trained for new use cases. The Jupyter notebook available as part of the TAO container can be used to re-train the models.
These models are also intended for easy deployment to the edge using DeepStream SDK or TensorRT. DeepStream provides the facilities to create efficient video analytics pipelines to capture, decode and pre-process the data before running inference.
The models are encrypted and can be decrypted with the following key:
Please make sure to use this as the key for all TAO commands that require a model load key.
The model output is the classification logits.
In order to use these models as pretrained weights for transfer learning, please use the snippet below as a template for the `model_config` component of the experiment spec file when training a 2D/3D ActionRecognitionNet. For more information on the experiment spec file, please refer to the Train Adapt Optimize (TAO) Toolkit User Guide.
```yaml
model_config:
  model_type: rgb
  input_type: "2d"
  # input_type: "3d"
  backbone: resnet18
  rgb_seq_length: 32
  rgb_pretrained_model_path: /workspace/action_recognition/resnet18_2d_rgb_hmdb5_32.tlt
  # rgb_pretrained_model_path: /workspace/action_recognition/resnet18_3d_rgb_hmdb5_32.tlt
  rgb_pretrained_num_classes: 5
  sample_rate: 1
```
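If you prefer to generate the spec programmatically, the sketch below reproduces only the fields shown in the template above for both the 2D and 3D variants, assuming the spec file is plain YAML; the output file names are hypothetical.

```python
import copy
import yaml  # requires PyYAML

# Fields copied from the template above; only input_type and the
# pretrained model path differ between the 2D and 3D variants.
base = {
    "model_config": {
        "model_type": "rgb",
        "input_type": "2d",
        "backbone": "resnet18",
        "rgb_seq_length": 32,
        "rgb_pretrained_model_path": "/workspace/action_recognition/resnet18_2d_rgb_hmdb5_32.tlt",
        "rgb_pretrained_num_classes": 5,
        "sample_rate": 1,
    }
}

spec_3d = copy.deepcopy(base)
spec_3d["model_config"]["input_type"] = "3d"
spec_3d["model_config"]["rgb_pretrained_model_path"] = (
    "/workspace/action_recognition/resnet18_3d_rgb_hmdb5_32.tlt"
)

# Hypothetical output file names.
for name, spec in [("train_rgb_2d.yaml", base), ("train_rgb_3d.yaml", spec_3d)]:
    with open(name, "w") as f:
        yaml.safe_dump(spec, f, default_flow_style=False, sort_keys=False)
```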
To create the entire end-to-end video analytics application, deploy this model with DeepStream SDK. DeepStream SDK is a streaming analytics toolkit that accelerates building AI-based video analytics applications. DeepStream supports direct integration of this model into the DeepStream sample apps.
To deploy this model with DeepStream 6.0, please refer to the sample code under sources/apps/sample_apps/deepstream-3d-action-recognition/ in the DeepStream SDK.
NVIDIA ActionRecognitionNet is trained on HMDB5, a subset of HMDB51 containing 1024 videos in total. The accuracy of the model on videos outside HMDB5 is therefore not expected to match the numbers reported in the performance section.
In general, to get better accuracy, more data is needed to fine-tune the pretrained model through TAO Toolkit.
License to use these models is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of these licenses.
The NVIDIA ActionRecognitionNet model classifies the action in a sequence of frames.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.