Nvidia Transfer Learning (NVTL) API is a cloud service for building end-to-end AI models with custom datasets. In addition to exposing NVTL Toolkit functionality through APIs, the service lets a client build complete workflows: creating datasets and models, obtaining pretrained models from NGC, obtaining default specs, and training, evaluating, optimizing, and exporting models for deployment at the edge. NVTL jobs run on GPUs within a multi-node cloud cluster.
Nvidia Transfer Learning API overview
One can develop client applications on top of the provided API, such as a Web-UI application, or use the provided NVTL remote client CLI.
The API allows you to create datasets and either upload their data to the service or pull data into the service directly from a public cloud link, without uploading. You can then create models and build experiments by linking models to train, eval, and inference datasets.
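As a minimal sketch of what such a call can look like from the command line, the example below creates a dataset with curl; the host, endpoint path, and field names are illustrative assumptions rather than the exact NVTL API schema:

curl -X POST "https://<api-host>/api/v1/user/<user_id>/dataset" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"type": "object_detection", "format": "kitti", "pull": "https://<public-cloud-link>/data.tar.gz"}'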
Actions such as train, evaluate, prune, retrain, export, and inference can be spawned through API calls. For each action, you can request its default parameters, update them to your liking, and pass them in when running the action. The specs are in JSON format.
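A hedged sketch of that flow with curl (the routes below are placeholders, not the exact NVTL paths): fetch the default train specs, edit the JSON locally, then pass it back when launching the action.

# Fetch the default train specs as JSON and save them locally
curl -s "https://<api-host>/api/v1/user/<user_id>/model/<model_id>/specs/train" -H "Authorization: Bearer <token>" > train_specs.json
# Edit train_specs.json to taste (e.g. epochs, learning rate), then launch the train action with the updated specs
curl -s -X POST "https://<api-host>/api/v1/user/<user_id>/model/<model_id>/job" -H "Authorization: Bearer <token>" -H "Content-Type: application/json" -d @train_specs.json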
The service exposes a Job API endpoint that allows you to cancel, download, and monitor jobs. Job API endpoints also provide useful information such as epoch number, accuracy, loss values, and ETA. Further, the service demarcates different users inside a cluster and can protect read-write access.
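For monitoring, a single GET against the Job API endpoint returns the job's current status; the routes shown here are again illustrative assumptions:

# Poll job status; the response carries fields such as epoch number, accuracy, loss, and ETA
curl -s "https://<api-host>/api/v1/user/<user_id>/model/<model_id>/job/<job_id>" -H "Authorization: Bearer <token>"
# Cancel the job if needed (the cancel route is likewise an assumption)
curl -s -X POST "https://<api-host>/api/v1/user/<user_id>/model/<model_id>/job/<job_id>/cancel" -H "Authorization: Bearer <token>"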
TAO Toolkit Workflow
The NVTL remote client is an easy-to-use command-line interface that uses API calls to expose an interface similar to the TAO Launcher CLI.
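If the remote client is installed (recent TAO releases ship it as a tao-client entry point, though the exact name and subcommands may differ for your version), a quick way to explore what it exposes is:

# List the networks and subcommands exposed by the remote client
tao-client --help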
The API service can run on any Kubernetes platform. The officially supported platforms are AWS EKS, Azure AKS, Google GCP, and bare metal.
This instance contains an easy-to-deploy Helm chart for the TAO Toolkit APIs.
Follow the instructions in the NVTL API documentation to set up a bare-metal Kubernetes instance or an AWS EKS instance.
Once you have set up the Kubernetes instance, you can deploy the NVTL API by following the instructions in this section.
You may update the values.yaml of the chart before deployment.
- image is the location of the NVTL API container image
- host, tlsSecret, corsOrigin, and authClientID are for future ingress rules assuring security and privacy
- imagePullSecret is the secret name that you set up to access Nvidia's nvcr.io registry
- imagePullPolicy is set to Always to fetch from nvcr.io instead of using locally cached images
- storageClassName is the storage class created by your K8s Storage Provisioner. On bare-metal deployments it is nfs-client, and on AWS EKS it can be standard. If no value is provided, your deployment uses your K8s cluster's default storage class
- storageAccessMode is set to ReadWriteMany to reuse allocated storage between deployments, or ReadWriteOnce to create new storage at every deployment
- storageSize is ignored by many Storage Provisioners, but this is where you would set your shared storage size
- backend is the platform used for training jobs. Defaults to local-k8s
- maxNumGpuPerNode is the number of GPUs assigned to each job. Note that multi-node training is not yet supported, so you are limited to the number of GPUs within a cluster node for now

Then deploy the chart:
helm install nvtl-api https://helm.ngc.nvidia.com/nvidia/tao/charts/nvtl-api-5.3.0.tgz --namespace default
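If you prefer not to edit values.yaml directly, the same keys can be overridden with standard Helm flags; my-values.yaml below is a hypothetical local override file:

# Override individual values at install time
helm install nvtl-api https://helm.ngc.nvidia.com/nvidia/tao/charts/nvtl-api-5.3.0.tgz --namespace default --set storageClassName=nfs-client --set maxNumGpuPerNode=1
# Or collect your overrides in a local file and pass it to Helm
helm install nvtl-api https://helm.ngc.nvidia.com/nvidia/tao/charts/nvtl-api-5.3.0.tgz --namespace default -f my-values.yaml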
You can validate your deployment. Check for the Ready or Completed states.
kubectl get pods -n default
You can debug your deployment. Look for events toward the bottom.
kubectl describe pods nvtl -n default
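If the events are not conclusive, the container logs are worth checking as well; substitute the actual pod name reported by kubectl get pods:

kubectl logs nvtl -n default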
Common issues are:
TAO Toolkit Getting Started
The license for TAO containers is included within the container at workspace/EULA.pdf. Licenses for the pre-trained models are available with the model files. By pulling and using the Train Adapt Optimize (TAO) Toolkit container to download models, you accept the terms and conditions of these licenses.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.