The Transfer Learning Toolkit (TLT) Computer Vision Inference Pipeline for x86 requires several containers:
These containers are specifically built for x86 machines with NVIDIA GPUs. Please check the Requirements and Installation before usage.
The NVIDIA Triton Inference Server is hosted on NVIDIA GPU Cloud (NGC). It runs as a separate process and serves inferences to the Client container, which houses applications and sample usage for the TLT Computer Vision API.
The Server Utilities container contains the folder structure and libraries necessary for the NVIDIA Triton Inference Server to serve inferences. This container also allows for simple TLT model conversion from the TLT Computer Vision Quick Start.
The Client Container provides an environment with the TLT CV Inference Pipeline libraries and open-source demos that enable developers to build and deploy custom applications.
The included demos are as follows:
Each of these demos leverages applicable Transfer Learning Toolkit models, which can be retrained.
These open-source demos highlight the C++ API for making inference requests.
The demos can run using a webcam device or a video file/stream, and an API is also available for custom image decode. More information is provided in the demo source and the API Documentation in the Quick Start. Further configuration documentation is provided through the Transfer Learning Toolkit Documentation.
One example of what can be built is an event-based application: the gesture recognition and gaze estimation APIs can detect events such as a person giving a thumbs-up gesture or looking directly at the camera.
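As a sketch of the event-based pattern, the snippet below shows how per-frame inference results might be turned into application events. The `GestureResult` type, the `"thumbs_up"` label, and the confidence threshold are illustrative assumptions for this sketch, not the actual TLT CV Inference Pipeline C++ API, which is documented in the Quick Start.

```python
# Hypothetical event-based application logic layered on top of per-frame
# gesture inference results. All names here (GestureResult, label strings,
# threshold) are illustrative stand-ins, not the real TLT CV API.
from dataclasses import dataclass

@dataclass
class GestureResult:
    label: str          # e.g. "thumbs_up" (illustrative label)
    confidence: float   # model confidence in [0, 1]

def on_frame(results, threshold=0.8):
    """Return the list of events triggered by one frame of results."""
    events = []
    for r in results:
        # Only fire an event when the model is sufficiently confident.
        if r.label == "thumbs_up" and r.confidence >= threshold:
            events.append("thumbs_up_detected")
    return events

# One confident thumbs-up and one low-confidence detection:
frame = [GestureResult("thumbs_up", 0.93), GestureResult("thumbs_up", 0.41)]
print(on_frame(frame))  # -> ['thumbs_up_detected']
```

The same pattern applies to gaze estimation: compare the estimated gaze vector against a region of interest each frame and emit an event when it falls inside.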
The license for the TLT Computer Vision Inference Pipeline containers is included within the containers at
workspace/TLT-CV-Inference-Pipeline-EULA.pdf. Licenses for the pre-trained models are available with the model files. By pulling and using the Transfer Learning Toolkit (TLT) Computer Vision Inference Pipeline and downloading models, you accept the terms and conditions of these licenses.
NVIDIA's platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model's developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.