TAO (Train Adapt Optimize) Toolkit is a Python-based AI toolkit built on TensorFlow and PyTorch. It provides transfer learning capabilities to adapt popular neural network architectures and backbones to your data, allowing you to train, fine-tune, prune, quantize, and export highly optimized and accurate AI models for edge deployment.
The purpose-built pre-trained models accelerate the AI training process and reduce the costs associated with large-scale data collection, labeling, and training models from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more. TAO supports training for CV and 3D point cloud modalities.
TAO packages a collection of containers, Python wheels, models, and a Helm chart. AI training tasks run on either TensorFlow or PyTorch, depending on the entrypoint for the model.
Trained TAO models can be deployed to DeepStream for video analytics applications or to Triton for inference serving use cases.
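To illustrate the Triton path, here is a minimal Python sketch that sends one image to a Triton server already serving an exported TAO model, using the `tritonclient` package. The model name (`tao_model`), tensor names (`input`, `output`), and the input shape are placeholders that depend on how the model was exported; take the real values from your model's Triton configuration.

```python
# Hedged sketch: query a Triton server hosting an exported TAO model.
# "tao_model", "input", "output", and the 1x3x544x960 shape are placeholders;
# use the names and shapes from your model's config.pbtxt.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Dummy CHW float image batch; real pre-processing depends on the network.
image = np.random.rand(1, 3, 544, 960).astype(np.float32)

infer_input = httpclient.InferInput("input", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

response = client.infer(model_name="tao_model", inputs=[infer_input])
print(response.as_numpy("output").shape)
```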
All containers needed to run TAO can be pulled from this location. See the list below for all available containers in this registry.
TAO Container Type | container_name:tag | What's it used for? |
---|---|---|
TAO TensorFlow v1 container | nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 | Older CV networks like YOLOs, FasterRCNN, DetectNet_v2, MaskRCNN, UNET and more |
TAO TensorFlow v2 container | nvcr.io/nvidia/tao/tao-toolkit:5.5.0-tf2 | CV networks like EfficientDet, EfficientNet and more |
TAO PyTorch container | nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt | Newer CV networks like Deformable-DETR, SegFormer and more as well as all ConvAI networks |
TAO Deploy container | nvcr.io/nvidia/tao/tao-toolkit:5.5.0-deploy | Container used to generate a TensorRT engine and INT8 calibration from a trained TAO model, and to run evaluation on the generated TensorRT engine |
TAO Data Service | nvcr.io/nvidia/tao/tao-toolkit:5.5.0-dataservice | Container for AI-assisted annotation and a few other data services |
TAO API container | nvcr.io/nvidia/tao/tao-toolkit:5.5.0-api | Front-end services container that can be used to host a TAO REST API server for remote execution of model training tasks. Useful for building higher level services |
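As a hedged example of pulling one of the containers listed above programmatically, the sketch below uses the Docker SDK for Python (the `docker` package) and assumes an NGC API key is available in the `NGC_API_KEY` environment variable; an equivalent `docker login` and `docker pull` from the shell works just as well.

```python
# Hedged sketch: log in to nvcr.io and pull the TAO PyTorch container
# with the Docker SDK for Python (pip install docker).
# Assumes NGC_API_KEY is set in the environment.
import os
import docker

client = docker.from_env()
client.login(username="$oauthtoken",
             password=os.environ["NGC_API_KEY"],
             registry="nvcr.io")

image = client.images.pull("nvcr.io/nvidia/tao/tao-toolkit", tag="5.5.0-pyt")
print(image.tags)
```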
TAO offers several highly accurate purpose-built pre-trained models, foundation models, and generic pre-trained starter models for a variety of vision AI tasks. Developers, system builders, and software partners building intelligent vision AI apps and services can bring their own data to train and fine-tune these pre-trained models instead of going through the hassle of large-scale data collection and training from scratch.
All the pretrained models, packaged and released as part of TAO, are captured in the TAO documentation. These models are also linked to this collection as model entities.
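For instance, pre-trained weights are typically downloaded from NGC before fine-tuning. The sketch below shells out to the NGC CLI from Python; the model path is only a placeholder, so substitute the exact path and version listed on the model card you intend to use.

```python
# Hedged sketch: fetch pre-trained TAO weights with the NGC CLI
# (assumes the `ngc` CLI is installed and configured with your API key).
import subprocess

# Placeholder path; copy the real "org/team/model:version" from the model card on NGC.
model_path = "nvidia/tao/<model_name>:<version>"

subprocess.run(
    ["ngc", "registry", "model", "download-version", model_path,
     "--dest", "./pretrained_models"],
    check=True,
)
```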
Refer to the TAO Quick Start Guide to get started with TAO.
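To make the workflow concrete, here is a minimal, hedged sketch of launching a training run through the TAO launcher CLI from Python. The network, spec file, and results directory are placeholders, and some networks require additional flags (for example an encryption key); the exact subcommands and arguments vary by network and TAO version, so follow the Quick Start Guide and the network-specific documentation.

```python
# Hedged sketch: start a TAO training run via the launcher CLI
# (assumes the nvidia-tao launcher wheel and a container runtime are installed).
# Network name, spec file, and results directory are placeholders.
import subprocess

subprocess.run(
    ["tao", "model", "detectnet_v2", "train",
     "-e", "specs/detectnet_v2_train.txt",  # experiment spec file (placeholder)
     "-r", "results/"],                     # results directory (placeholder)
    check=True,
)
```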
The license for the TAO containers is included in the banner of each container. Licenses for the pre-trained models are available with the model cards on NGC. By pulling and using the Train Adapt Optimize (TAO) Toolkit container to download models, you accept the terms and conditions of these licenses.
NVIDIA Inference Microservices (NIMs) for trying out TAO models:
- A Gradio app to try out zero-shot, in-context segmentation using the SEGIC model, available in the TAO PyTorch GitHub.
- A Triton inference application for the FoundationPose model in TAO Triton Apps.
- A GitHub repository called metropolis_nim_workflows containing reference workflows that use the published NIMs.
Blog posts and other resources:
- New Foundational Models and Training Capabilities with NVIDIA TAO 5.5
- Train like a 'pro' with AutoML in TAO
- Deploy TAO on Azure ML
- Synthetic Data and TAO
- Action Recognition Blog
- Real-time License Plate Detection
- 2D Pose Estimation: Part 1
- 2D Pose Estimation: Part 2
- Building ConvAI with TAO Toolkit
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.