NVIDIA Transfer Learning API - Helm Chart
Description: Helm Chart to deploy NVIDIA Transfer Learning APIs.
Publisher: -
Latest Version: 5.3.0
Compressed Size: 139.78 KB
Modified: March 25, 2024


NVIDIA Transfer Learning (NVTL) API is a cloud service that enables building end-to-end AI models using custom datasets. In addition to exposing NVTL Toolkit functionality through APIs, the service lets a client build end-to-end workflows: creating datasets and models, obtaining pretrained models from NGC, obtaining default specs, and training, evaluating, optimizing, and exporting models for deployment at the edge. NVTL jobs run on GPUs within a multi-node cloud cluster.


Figure: NVIDIA Transfer Learning API overview

You can develop client applications on top of the provided API, such as a web UI, or use the provided NVTL remote client CLI.

The API allows you to create datasets and either upload their data to the service or pull data into the service directly from a public cloud link, without uploading. You then create models, and can create experiments by linking models to train, eval, and inference datasets.

Actions such as train, evaluate, prune, retrain, export, and inference can be spawned through API calls. For each action, you can request the action's default parameters, update them as needed, and then pass them in when running the action. Specs are in JSON format; a curl sketch of this flow appears after the next paragraph.

The service exposes Job API endpoints that let you cancel, download, and monitor jobs. These endpoints also report useful information such as epoch number, accuracy, loss values, and ETA. Further, the service demarcates different users inside a cluster and can protect read-write access.
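
For illustration, this flow might be driven with curl roughly as follows. The endpoint paths, variables, and JSON fields below are assumptions made for the sketch, not the documented NVTL API routes; consult the NVTL API documentation for the actual schema.

BASE=https://<nvtl-host>/api/v1                       # assumed base path
# Fetch the default specs (JSON) for a model's train action
curl -s "$BASE/models/$MODEL_ID/specs/train" > train_specs.json
# Edit train_specs.json as needed, then spawn the train action
curl -s -X POST "$BASE/models/$MODEL_ID/jobs" -H "Content-Type: application/json" -d "{\"action\": \"train\", \"specs\": $(cat train_specs.json)}"
# Poll the returned job ID for status, epoch number, loss, and ETA
curl -s "$BASE/models/$MODEL_ID/jobs/$JOB_ID"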


Figure: TAO Toolkit Workflow

The NVTL remote client is an easy-to-use command-line interface that uses API calls to expose an interface similar to the TAO Launcher CLI.
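
For reference, installing and invoking the remote client might look like this; the package and command names are assumptions based on the TAO Toolkit remote client and are not confirmed on this page:

pip3 install nvidia-tao-client    # assumed PyPI package name
tao-client --help                 # list the available commands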

The API service can run on any Kubernetes platform. The officially supported platforms are AWS EKS, Azure AKS, Google Cloud (GCP), and bare metal.

This instance contains an easy-to-deploy Helm chart for TAO Toolkit APIs.

Setting up the NVTL API

  1. Follow the instructions in the NVTL API documentation to set up a bare-metal Kubernetes instance or an AWS EKS instance.

  2. Once you have set up the Kubernetes instance, you can deploy the NVTL API by following the instructions in this section.

You may update the chart's values.yaml before deployment; a sample override is sketched after the list below.

  • image is the location of the NVTL API container image
  • host, tlsSecret, corsOrigin, and authClientID are for future ingress rules assuring security and privacy
  • imagePullSecret is the name of the secret you set up to access NVIDIA's nvcr.io registry
  • imagePullPolicy is set to Always to fetch from nvcr.io instead of using a locally cached image
  • storageClassName is the storage class created by your K8s storage provisioner. On bare-metal deployments it is nfs-client, and on AWS EKS it can be standard. If no value is provided, your deployment uses your K8s cluster's default storage class
  • storageAccessMode is set to ReadWriteMany to reuse allocated storage between deployments, or ReadWriteOnce to create new storage on every deployment
  • storageSize is ignored by many storage provisioners, but this is where you would set your shared storage size
  • backend is the platform used for training jobs. Defaults to local-k8s
  • maxNumGpuPerNode is the number of GPUs assigned to each job. Note that multi-node training is not yet supported, so for now you are limited to the number of GPUs within a single cluster node
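
For reference, a minimal values.yaml override might look like the following; the image path and storage size are placeholders rather than verified defaults:

image: nvcr.io/nvidia/tao/nvtl-api:5.3.0    # placeholder image location
imagePullSecret: imagepullsecret
imagePullPolicy: Always
storageClassName: nfs-client                # bare metal; on AWS EKS, standard
storageAccessMode: ReadWriteMany
storageSize: 100Gi                          # ignored by many provisioners
backend: local-k8s
maxNumGpuPerNode: 1

With values.yaml updated, install the chart:
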
helm install nvtl-api https://helm.ngc.nvidia.com/nvidia/tao/charts/nvtl-api-5.3.0.tgz --namespace default
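
To inspect the chart's default values, or to pass your overrides at install time, the standard Helm options apply:

helm show values https://helm.ngc.nvidia.com/nvidia/tao/charts/nvtl-api-5.3.0.tgz
helm install nvtl-api https://helm.ngc.nvidia.com/nvidia/tao/charts/nvtl-api-5.3.0.tgz --namespace default -f values.yaml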

You can validate your deployment by checking that pods reach the Ready or Completed states:

kubectl get pods -n default

You can debug your deployment by describing a pod; look for events toward the bottom of the output:

kubectl describe pods nvtl -n default

Common issues are:

  • GPU Operator or Storage Provisioner pods not in the Ready or Completed states
  • A missing or invalid imagePullSecret; a standard way to create one is shown below
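
If the imagePullSecret is missing, you can create a pull secret for nvcr.io with an NGC API key; the secret name must match the imagePullSecret value in values.yaml:

kubectl create secret docker-registry imagepullsecret --docker-server=nvcr.io --docker-username='$oauthtoken' --docker-password=<NGC_API_KEY> -n default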

License

The TAO Toolkit Getting Started License for TAO containers is included within the container at workspace/EULA.pdf. Licenses for the pre-trained models are available with the model files. By pulling and using the Train Adapt Optimize (TAO) Toolkit container to download models, you accept the terms and conditions of these licenses.

Technical blogs

  • Read the two-part blog on training and optimizing a 2D body pose estimation model with TAO - Part 1 | Part 2
  • Learn how to train a real-time license plate detection and recognition app with TAO and DeepStream.
  • Model accuracy is extremely important; learn how you can achieve state-of-the-art accuracy for classification and object detection models using TAO.
  • Learn how to train an instance segmentation model using MaskRCNN with TAO.
  • Learn how to improve INT8 accuracy using quantization-aware training (QAT) with TAO.
  • Read the technical tutorial on how the PeopleNet model can be trained with custom data using the Transfer Learning Toolkit.
  • Learn how to train and deploy real-time intelligent video analytics apps and services using the DeepStream SDK.

Suggested reading

  • More information about TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone.
  • Read the TAO Getting Started guide and release notes.
  • If you have any questions or feedback, please refer to the discussions on the TAO Toolkit Developer Forums.
  • Deploy your model on the edge using DeepStream. Learn more about the DeepStream SDK.

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.