Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for taking purpose-built, pre-trained AI models and customizing them with your own data. TAO adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.
The pre-trained models accelerate AI training and reduce the costs associated with large-scale data collection, labeling, and training models from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.
Build end-to-end services and solutions for transforming pixels and sensor data into actionable insights using TAO, the DeepStream SDK, and TensorRT. TAO can train models for common vision AI tasks such as object detection, classification, and instance segmentation, as well as more complex tasks such as facial landmark estimation, gaze estimation, and heart rate estimation.
This resource lists several sample notebooks that walk you through the full training workflow using TAO 3.0.
To get started, choose the model architecture that you want to build, select the appropriate model card on NGC, and then choose one of the supported backbones.
Set up your Python environment using `virtualenv` and `virtualenvwrapper`.
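As a sketch of that setup, one way to create and activate a dedicated Python 3 environment with `virtualenvwrapper` (the environment name `launcher` here is just an example, not a TAO requirement):

```shell
# Install virtualenv and virtualenvwrapper (assumes pip3 is available)
pip3 install virtualenv virtualenvwrapper

# Enable the virtualenvwrapper shell functions
export WORKON_HOME="$HOME/.virtualenvs"
source "$(command -v virtualenvwrapper.sh)"

# Create and activate a Python 3 environment for the TAO launcher
mkvirtualenv launcher -p "$(command -v python3)"
```

Later sessions can re-activate the environment with `workon launcher`.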
TAO Toolkit provides an abstraction above the container: you launch all of your training jobs from the launcher, and `tao-launcher` pulls the appropriate container for you, so there is no need to pull it manually. Install the launcher using pip, then start Jupyter to run the sample notebooks:
```shell
# Install the TAO launcher
pip3 install nvidia-tao

# Serve the sample notebooks from the current directory
jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root
```
| Purpose-built Model | Jupyter notebook |
| --- | --- |
| PeopleNet | detectnet_v2/detectnet_v2.ipynb |
| TrafficCamNet | detectnet_v2/detectnet_v2.ipynb |
| DashCamNet | detectnet_v2/detectnet_v2.ipynb |
| FaceDetectIR | detectnet_v2/detectnet_v2.ipynb |
| VehicleMakeNet | classification/classification.ipynb |
| VehicleTypeNet | classification/classification.ipynb |
| PeopleSegNet | mask_rcnn/mask_rcnn.ipynb |
| PeopleSemSegNet | unet/unet_isbi.ipynb |
| Bodypose Estimation | bpnet/bpnet.ipynb |
| License Plate Detection | detectnet_v2/detectnet_v2.ipynb |
| License Plate Recognition | lprnet/lprnet.ipynb |
| Gaze Estimation | gazenet/gazenet.ipynb |
| Facial Landmark | fpenet/fpenet.ipynb |
| Heart Rate Estimation | heartratenet/heartratenet.ipynb |
| Gesture Recognition | gesturenet/gesturenet.ipynb |
| Emotion Recognition | emotionnet/emotionnet.ipynb |
| FaceDetect | facenet/facenet.ipynb |
| ActionRecognitionNet | action_recognition_net/actionrecognitionnet.ipynb |
| PoseClassificationNet | pose_classification_net/pose_classificationnet.ipynb |
| PointPillars | pointpillars/pointpillars.ipynb |
| Open model architecture | Jupyter notebook |
| --- | --- |
| DetectNet_v2 | detectnet_v2/detectnet_v2.ipynb |
| FasterRCNN | faster_rcnn/faster_rcnn.ipynb |
| YOLOv3 | yolo_v3/yolo_v3.ipynb |
| YOLOv4 | yolo_v4/yolo_v4.ipynb |
| YOLOv4-Tiny | yolo_v4_tiny/yolo_v4_tiny.ipynb |
| SSD | ssd/ssd.ipynb |
| DSSD | dssd/dssd.ipynb |
| RetinaNet | retinanet/retinanet.ipynb |
| MaskRCNN | mask_rcnn/mask_rcnn.ipynb |
| UNET | unet/unet_isbi.ipynb |
| Image Classification | classification/classification.ipynb |
| EfficientDet | efficientdet/efficientdet.ipynb |