Linux / arm64
The Train, Adapt, and Optimize (TAO) Toolkit Computer Vision Inference Pipeline for L4T requires several containers:
These containers are built specifically for NVIDIA Jetson devices running JetPack with Linux for Tegra (L4T). Please review the Requirements and Installation sections before use.
The NVIDIA Triton Inference Server built for L4T is provided through GitHub. The server runs as a separate process and serves inferences to the Client container, which houses applications and sample usage for the TAO Toolkit Computer Vision API.
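Because the server runs as a separate process, a client should confirm it is reachable before sending inference requests. The sketch below polls Triton's standard `/v2/health/ready` HTTP endpoint; the host and default port 8000 are illustrative and depend on how the server container was launched.

```python
# Minimal readiness probe for a Triton Inference Server over HTTP/REST.
# Assumes the server exposes the standard health endpoint (default port 8000);
# host/port values here are illustrative.
import urllib.request
import urllib.error


def ready_url(host: str = "localhost", port: int = 8000) -> str:
    """Build the standard Triton readiness endpoint URL."""
    return f"http://{host}:{port}/v2/health/ready"


def server_is_ready(host: str = "localhost", port: int = 8000,
                    timeout: float = 2.0) -> bool:
    """Return True if the server responds 200 on its readiness endpoint."""
    try:
        with urllib.request.urlopen(ready_url(host, port), timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    state = "ready" if server_is_ready() else "not reachable"
    print(f"Triton server is {state}")
```

A client application can loop on this check at startup so that inference requests are only issued once the server reports ready.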
The Server Utilities container contains the folder structure and libraries necessary for the NVIDIA Triton Inference Server to serve inferences. This container also allows for simple TAO model conversion from the TAO Toolkit Computer Vision Quick Start.
The Client Container provides an environment with the TAO Toolkit CV Inference Pipeline libraries and open-source demos that enable developers to build and deploy custom applications.
The included demos are as follows:
Each of these demos leverages applicable TAO models, which can be retrained.
These open-source demos highlight the C++ API for submitting inference requests. The demos can run using a webcam device or a video file/stream, and an API is also available for custom image decoding. More information is provided in the demo source and the API Documentation in the Quick Start; further configuration documentation is available through the TAO Toolkit Documentation.
One class of applications a developer can build is event-based applications: the gesture recognition and gaze estimation APIs can detect events such as a person giving a thumbs-up gesture or looking directly at the camera.
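As a sketch of that event-based pattern, the snippet below assumes the application has already obtained per-frame results (a gesture label and a gaze flag) from the inference client; the field names, labels, and debounce window are hypothetical, not part of the actual TAO Toolkit CV API.

```python
# Hypothetical event-based post-processing over per-frame inference results.
# The field names ("gesture", "looking_at_camera") and the debounce window
# are illustrative, not part of the TAO Toolkit CV API.
from dataclasses import dataclass
from typing import List


@dataclass
class FrameResult:
    gesture: str             # e.g. "thumbs_up", "fist", "none"
    looking_at_camera: bool  # derived from gaze estimation


def detect_events(frames: List[FrameResult], min_consecutive: int = 3) -> List[str]:
    """Emit an event when a condition holds for min_consecutive frames in a row."""
    events = []
    thumbs_run = gaze_run = 0
    for f in frames:
        thumbs_run = thumbs_run + 1 if f.gesture == "thumbs_up" else 0
        gaze_run = gaze_run + 1 if f.looking_at_camera else 0
        if thumbs_run == min_consecutive:
            events.append("thumbs_up_detected")
        if gaze_run == min_consecutive:
            events.append("attention_detected")
    return events
```

Requiring a condition to hold across several consecutive frames is a simple way to suppress single-frame misclassifications before triggering an application event.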
The license for the TAO Toolkit Computer Vision Inference Pipeline containers is included within the containers at workspace/TAO-CV-Inference-Pipeline-EULA.pdf. Licenses for the pre-trained models are available with the model files. By pulling and using the TAO Toolkit Computer Vision Inference Pipeline and downloading models, you accept the terms and conditions of these licenses.
NVIDIA's platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model's developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.