TAO Toolkit API Helm

Helm chart to deploy TAO Toolkit APIs

Compressed size: 131.46 KB
Published: December 12, 2023

TAO Toolkit API - Helm Chart

TAO Toolkit API is a Kubernetes service that enables building end-to-end AI models using custom datasets. In addition to exposing TAO Toolkit functionality through APIs, the service enables a client to build end-to-end workflows: creating datasets and models, obtaining pretrained models from NGC, obtaining default specs, and training, evaluating, optimizing, and exporting models for deployment on edge devices. It can be easily installed on a Kubernetes cluster (local or AWS EKS) using a Helm chart with minimal dependencies. TAO Toolkit jobs run using GPUs available on the cluster and can scale to a multi-node setting.

TAO Toolkit API overview

You can develop client applications on top of the provided API, such as a web UI, or use the provided TAO remote client CLI.

The API allows users to create datasets and upload their data to the service. Users then create models and set up experiments by linking models to train, eval, and inference datasets. Actions such as train, evaluate, prune, retrain, export, and inference can be spawned through simple API calls. For each action, the user can obtain default specs using an HTTP GET and POST the spec they prefer for that action; specs are in JSON format. Another unique feature of the service is the ability to chain jobs: for example, a user can run train and evaluate with a single API call. This abstracts away complex directory manipulations and dependency checks. The service exposes a Job API that lets a user cancel, download, and monitor jobs, and it also provides useful information such as epoch number, accuracy, loss values, and ETA. Further, the service demarcates different users inside a cluster and can protect read-write access.
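As a sketch of this flow, a client might issue calls along the lines below. The endpoint paths, payload fields, and the `jq`-based ID extraction are illustrative assumptions, not the documented TAO Toolkit API schema:

```shell
# Hypothetical REST flow; endpoint paths and JSON fields are illustrative
# assumptions, not the documented TAO Toolkit API schema.
TAO_API="http://<ingress-host>/api/v1"   # replace with your deployment's host

tao_post() {  # POST a JSON payload to an endpoint and print the response
  curl -s -X POST "$TAO_API$1" -H "Content-Type: application/json" -d "$2"
}

# Example chained flow (uncomment against a live deployment):
# DATASET_ID=$(tao_post /dataset '{"type":"object_detection"}' | jq -r .id)
# MODEL_ID=$(tao_post /model "{\"train_dataset\":\"$DATASET_ID\"}" | jq -r .id)
# tao_post "/model/$MODEL_ID/job" '{"actions":["train","evaluate"]}'
```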

TAO Toolkit Workflow

The TAO remote client is an easy-to-use command-line interface that uses API calls to expose an interface similar to the TAO Launcher CLI.

The API service can run on any Kubernetes platform. The two officially supported platforms are AWS EKS and bare metal.

This page provides an easy-to-deploy Helm chart for the TAO Toolkit API.

Setting up the TAO Toolkit API

  1. Follow the instructions in the TAO Toolkit API documentation to set up a bare-metal Kubernetes instance or an AWS EKS instance.

  2. Once you have set up the Kubernetes instance, deploy the TAO Toolkit API by following the instructions in this section.
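Before deploying, it can help to confirm the cluster itself is healthy. A minimal sketch, assuming kubectl is configured for your cluster (the GPU Operator namespace name varies by install method, so this greps across all namespaces):

```shell
# Optional pre-deployment sanity checks; skipped gracefully when kubectl
# is not on the PATH or no cluster is reachable.
CLUSTER_READY="unknown"
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes || true                    # all nodes should be Ready
  kubectl get pods -A | grep -i gpu || true    # GPU Operator pods should be Running/Completed
  CLUSTER_READY="checked"
fi
```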

You may update the chart's values.yaml before deployment.

  • host, tlsSecret, corsOrigin, and authClientID are for future ingress rules ensuring security and privacy
  • imagePullSecret is the name of the secret you set up to access NVIDIA's nvcr.io registry
  • imagePullPolicy is set to Always to fetch from nvcr.io instead of using a locally cached image
  • storageClassName is the storage class created by your K8s storage provisioner. On bare-metal deployments it is nfs-client; on AWS EKS it can be standard. If no value is provided, the deployment uses your K8s cluster's default storage class
  • storageAccessMode is set to ReadWriteMany to reuse allocated storage between deployments, or ReadWriteOnce to create new storage at every deployment
  • storageSize is ignored by many storage provisioners, but this is where you would set your shared storage size
  • backend is the platform used for training jobs; defaults to local-k8s
  • numGpu is the number of GPUs assigned to each job. Note that multi-node training is not yet supported, so you are limited to the number of GPUs within a single cluster node for now
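Put together, a values.yaml might look like the sketch below. Every value shown is an illustrative example for a bare-metal deployment, not a required setting:

```yaml
# Illustrative values.yaml sketch; adjust each value for your cluster.
host: tao.example.com             # future ingress rules
tlsSecret: tao-tls-secret
corsOrigin: https://tao.example.com
authClientID: example-client-id
imagePullSecret: imagepullsecret  # secret holding nvcr.io credentials
imagePullPolicy: Always
storageClassName: nfs-client      # bare metal; "standard" on AWS EKS
storageAccessMode: ReadWriteMany  # reuse storage between deployments
storageSize: 100Gi                # ignored by many provisioners
backend: local-k8s
numGpu: 1
```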

Optional MLOPS setting for Weights And Biases

  • wandbApiKey: The Weights & Biases API key for your wandb.ai account.

Optional MLOPS setting for ClearML

  • clearMlWebHost: The value of the CLEARML_WEB_HOST environment variable generated when creating a ClearML credential.
  • clearMlApiHost: The value of the CLEARML_API_HOST environment variable generated when creating a ClearML credential.
  • clearMlFilesHost: The value of the CLEARML_FILES_HOST environment variable generated when creating a ClearML credential.
  • clearMlApiAccessKey: The value of the CLEARML_API_ACCESS_KEY environment variable generated when creating a ClearML credential.
  • clearMlApiSecretKey: The value of the CLEARML_API_SECRET_KEY environment variable generated when creating a ClearML credential.
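These optional MLOps keys slot into the same values.yaml. The hosts below are the ClearML hosted-service defaults, and the key values are placeholders:

```yaml
# Optional MLOps settings; all key values are placeholders.
wandbApiKey: "<your-wandb-api-key>"
clearMlWebHost: "https://app.clear.ml"    # hosted-service default
clearMlApiHost: "https://api.clear.ml"
clearMlFilesHost: "https://files.clear.ml"
clearMlApiAccessKey: "<your-access-key>"
clearMlApiSecretKey: "<your-secret-key>"
```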
Once values.yaml is updated, deploy the chart:

helm install tao-toolkit-api chart/ --namespace default

You can validate your deployment. Check for the Ready or Completed states.

kubectl get pods -n default

You can debug your deployment. Look for events toward the bottom.

kubectl describe pods tao-toolkit-api -n default

Common issues are:

  • GPU Operator or Storage Provisioner pods not in Ready or Completed states
  • Missing or invalid image pull secret (the name must match imagePullSecret in values.yaml)
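For the second issue, the pull secret can be recreated along these lines. This is a sketch, not the official setup procedure: the secret name must match imagePullSecret in values.yaml, '$oauthtoken' is the literal username NGC expects, and the API key placeholder must be replaced with your own:

```shell
# Sketch: recreate the nvcr.io image pull secret. Exits the branch early
# when kubectl is unavailable so the commands read as documentation.
NGC_API_KEY="${NGC_API_KEY:-<your-ngc-api-key>}"   # export NGC_API_KEY first
if command -v kubectl >/dev/null 2>&1; then
  kubectl create secret docker-registry imagepullsecret \
    --docker-server="nvcr.io" \
    --docker-username='$oauthtoken' \
    --docker-password="$NGC_API_KEY" \
    --namespace default || true
fi
```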


License

The license for TAO containers is included within the container at workspace/EULA.pdf. Licenses for the pre-trained models are available with the model files. By pulling and using the Train Adapt Optimize (TAO) Toolkit container to download models, you accept the terms and conditions of these licenses.


Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.