Inference TAO Action Recognition using Quick Deploy
This is part 2 of TAO workflow on Vertex AI. For part 1, refer to Train TAO Action Recognition using Quick Deploy resource.
NVIDIA TAO Toolkit on Google Vertex AI
The NVIDIA TAO Toolkit is an AI training toolkit that simplifies model training and inference optimization using pretrained models and a simple CLI interface. The result is an ultra-streamlined workflow. Bring your own models or use NVIDIA pre-trained models, adapt them to your own real or synthetic data, then optimize for inference throughput, all without needing AI expertise or large training datasets.
TAO Toolkit workflows can be deployed on Google Vertex AI using Quick Deploy.
About Quick Deploy
The quick deploy feature automatically sets up the Vertex AI instance with an optimal configuration, preloads the dependencies, and runs the software from NGC, with no need to set up the infrastructure yourself.
TAO Action Recognition
In this workflow, you will optimize and run inference on an action recognition model using the ActionRecognitionNet pretrained model and TAO. TAO Action Recognition is a configurable workflow for training a 2D or 3D neural network with a ResNet backbone. The pretrained model you use here was trained on five classes from the HMDB51 dataset. More information about this model can be found in the ActionRecognitionNet model card.
Get Started with TAO
To help you get started, we have created a few Jupyter Notebooks that can be easily deployed on Vertex AI using NGC’s quick deploy feature. This feature automatically sets up the Vertex AI instance with an optimal configuration needed for training the model.
The workflow is divided into two Jupyter notebooks: one for training, and one for model inference and optimization.
Model Optimization and Inference
Use the notebook in this resource for model inference and optimization; it runs inference on a 3D action recognition model.
In this notebook, you will learn how to leverage the simplicity and convenience of TAO to:
- Use a trained 3D RGB model for action recognition on a subset of the HMDB51 dataset.
- Evaluate the trained model.
- Run inference on the trained model.
- Export the trained model to a .etlt file for deployment to DeepStream.
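The notebook drives these steps through the TAO Toolkit launcher CLI. The commands below are a minimal sketch of that flow, not the notebook's exact cells: the spec file names, key, and directory paths are placeholders, and exact arguments vary by TAO version, so consult the notebook itself for the authoritative invocations.

```shell
# Hypothetical sketch of the evaluate / inference / export steps using the
# TAO launcher CLI. All file names and values below are placeholders.

KEY=nvidia_tao                 # model load key (placeholder)
SPECS=/workspace/specs         # experiment spec directory (placeholder)

# Evaluate the trained 3D RGB model on the HMDB51 subset
tao action_recognition evaluate \
    -e $SPECS/evaluate_rgb_3d.yaml \
    -k $KEY

# Run inference with the trained model on sample clips
tao action_recognition inference \
    -e $SPECS/infer_rgb_3d.yaml \
    -k $KEY

# Export the trained model to a .etlt file for DeepStream deployment
tao action_recognition export \
    -e $SPECS/export_rgb_3d.yaml \
    -k $KEY
```

Each subcommand reads its configuration from the experiment spec file passed with `-e`; the `-k` key must match the key the model was trained with.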
Simply click on the button that reads “Deploy to Vertex AI” and follow the instructions.
Note: A customized kernel for the Jupyter Notebook is used as the primary mechanism for deployment. This kernel has been built on the TAO Toolkit container; for more information on the container itself, please refer to this link.
The container version for this notebook is nvcr.io/nvidia/tao/tao-toolkit-pyt:v3.21.11-py3
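If you want to work with the same container outside Vertex AI, it can be pulled directly from NGC. This sketch assumes Docker and the NVIDIA Container Toolkit are installed and that you have logged in to nvcr.io with an NGC API key:

```shell
# Log in to the NGC registry (username is literally "$oauthtoken";
# the password is your NGC API key)
docker login nvcr.io

# Pull the exact container version used by this notebook
docker pull nvcr.io/nvidia/tao/tao-toolkit-pyt:v3.21.11-py3

# Start an interactive session with GPU access
docker run --rm -it --gpus all \
    nvcr.io/nvidia/tao/tao-toolkit-pyt:v3.21.11-py3 /bin/bash
```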
To train and fine-tune Action Recognition on your dataset, please refer to the Train TAO Action Recognition using Quick Deploy resource.
By pulling and using the TAO Toolkit container, you accept the terms and conditions of these licenses.
- Train like a ‘pro’ without being an AI expert using TAO AutoML
- Developing and Deploying AI-powered Robots with NVIDIA Isaac Sim and NVIDIA TAO
- Learn endless ways to adapt and supercharge your AI workflows with TAO - Whitepaper
- Customize Action Recognition with TAO and deploy with DeepStream
- Read the two-part blog on training and optimizing a 2D body pose estimation model with TAO - Part 1 | Part 2
- Learn how to train a real-time license plate detection and recognition app with TAO and DeepStream.
- Model accuracy is extremely important. Learn how you can achieve state-of-the-art accuracy for classification and object detection models using TAO.
- More information about TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone
- TAO documentation
- Read the TAO Getting Started guide and release notes.
- If you have any questions or feedback, please refer to the discussions on TAO Toolkit Developer Forums
- Deploy your models for video analytics application using DeepStream. Learn more about DeepStream SDK
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.