As humans, we are constantly on the move, performing actions such as walking, running, and sitting every single day. These actions are a natural extension of our daily lives. Applications that capture these specific actions can be extremely valuable in sports for analytics, in healthcare for patient safety, in retail for a better shopping experience, and more.
The sample Jupyter notebook provided here contains pretrained 2D and 3D action recognition models that you can retrain on Google Vertex AI simply by deploying it with the NGC One-Click Deploy feature.
The quick deploy feature automatically sets up the Vertex AI instance with an optimal configuration, preloads the dependencies, and runs the software from NGC, with no need to set up the infrastructure yourself.
The pretrained model in this notebook is an action recognition network, which recognizes what people are doing by interpreting streaming video. Six pretrained 2D and 3D action recognition models are delivered as part of this resource. Both the 2D and 3D models were trained on a subset of HMDB51, a large human motion database.
Two classes were chosen for this tutorial:
The following image shows some examples of these two groups of actions.
For more information about this model, including the classes used for training, please refer to this page: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/actionrecognitionnet
To fine-tune and customize the model, you’ll be using the TAO (Train, Adapt, and Optimize) Toolkit. The TAO Toolkit, a low-code AI model development solution, leverages the power of transfer learning to help you fine-tune pretrained models with your own data. Transfer learning is a commonly used training technique in which the features a model has learned on one task are transferred and reused on a different task. With the TAO Toolkit, you can customize models for tasks in computer vision, natural language processing, and speech.
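As a rough illustration of what this looks like in practice, TAO fine-tuning is driven from the command line with an experiment spec file. The model version, spec file name, directories, and key below are placeholders, and exact subcommand syntax can vary across TAO Toolkit releases, so treat this as a sketch of the workflow rather than the notebook's exact commands:

```shell
# Download the pretrained ActionRecognitionNet weights from NGC
# (model path and version tag are illustrative).
ngc registry model download-version "nvidia/tao/actionrecognitionnet:trainable_v1.0"

# Fine-tune on your own data. The experiment spec file (placeholder name)
# defines the dataset paths, model architecture (2D or 3D), and
# training hyperparameters; the key decrypts the pretrained model.
tao action_recognition train \
    -e specs/train_rgb_3d_finetune.yaml \
    -r results/rgb_3d \
    -k $KEY
```

The notebook itself walks through the real spec files and commands step by step.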
Once you have customized the model, you can use built-in optimization techniques such as model pruning and quantization to optimize the model for inference on the target GPU without sacrificing accuracy.
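The optimization step is also run from the TAO command line, with an export that produces a deployable model; quantization settings are typically chosen in the export spec. As above, the file names and flags here are placeholders and may differ by TAO version, so this is only a sketch:

```shell
# Export the fine-tuned model for inference on the target GPU.
# The export spec (placeholder name) selects options such as the
# precision/quantization of the exported model.
tao action_recognition export \
    -e specs/export_rgb_3d.yaml \
    -k $KEY
```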
All the training steps are covered in the Jupyter notebook.
To help you get started, the sample Jupyter notebook can be deployed on Vertex AI in a few clicks using NGC’s One-Click Deploy feature described above.
Simply click the button that reads “Deploy to Vertex AI” and follow the instructions.
Note: A customized kernel for the Jupyter notebook, built on the TAO Toolkit container, is used as the primary mechanism for deployment. For more information on the container itself, refer to this link: