Clara AGX Triton Inference Server

Description
This release is Triton Inference Server built only with support for Clara AGX hardware. Triton Inference Server (formerly TensorRT Inference Server) simplifies the deployment of AI models at scale in production and maximizes inference performance.
Publisher
NVIDIA
Latest Tag
21.05.1-v1-py3
Modified
October 5, 2023
Compressed Size
3.67 GB
Multinode Support
No
Multi-Arch Support
No
21.05.1-v1-py3 (Latest) Security Scan Results

Linux / arm64


This container is deprecated. It was released as part of the Clara Holoscan SDK v0.1 and will no longer be compatible with Clara Holoscan SDK v0.2.

What Is The Clara AGX Triton Inference Server?

This release is Triton Inference Server built only with support for Clara AGX hardware.

Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. Triton supports HTTP/REST and gRPC protocols that allow remote clients to request inferencing for any model being managed by the server (see the client sketch below). For edge deployments, Triton is available as a shared library with a C API that allows the full functionality of Triton to be included directly in an application. Three Docker images are available:

  • The xx.yy-py3 image contains the Triton Inference Server with support for TensorFlow, PyTorch, TensorRT, ONNX and OpenVINO models.

  • The xx.yy-py3-sdk image contains Python and C++ client libraries, client examples, and the Model Analyzer.

  • The xx.yy-py3-min image is used as the base for creating custom Triton server containers as described in Customize Triton Container.

For more information, refer to Triton Inference Server GitHub.
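
Client requests over the HTTP/REST protocol can be issued with the tritonclient Python package. The sketch below is illustrative only: it assumes tritonclient[http] is installed on the client machine, the server is reachable at localhost:8000, and a hypothetical model named my_model with an FP32 input INPUT__0 and output OUTPUT__0 is loaded; substitute the names and shape of your own model.

    # Minimal client sketch (assumptions: tritonclient[http] installed, server on
    # localhost:8000, a hypothetical model "my_model" with FP32 input "INPUT__0"
    # and output "OUTPUT__0"; adjust these to match your deployed model).
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Build one FP32 input tensor and attach the data to the request.
    input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
    infer_input = httpclient.InferInput("INPUT__0", [1, 3, 224, 224], "FP32")
    infer_input.set_data_from_numpy(input_data)

    # Send the inference request and read back the named output as a NumPy array.
    response = client.infer(model_name="my_model", inputs=[infer_input])
    output = response.as_numpy("OUTPUT__0")
    print("Output shape:", output.shape)

An equivalent client for the gRPC protocol is provided by the tritonclient.grpc module; the same request structure applies.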

Running The Triton Inference Server

Before you can run an NGC deep learning framework container, your Docker environment must support NVIDIA GPUs. To run a container, issue the appropriate command as explained in the Running A Container chapter in the NVIDIA Containers And Frameworks User Guide and specify the registry, repository, and tags. For more information about using NGC, refer to the NGC Container User Guide.

The appropriate method for your system depends on the AGX OS version installed, the specific NGC cloud image provided by a cloud service provider, or the software that you have installed in preparation for running NGC containers.

Procedure

  1. Select the Tags tab and locate the container image release that you want to run.

  2. In the Pull Tag column, click the icon to copy the docker pull command.

  3. Open a command prompt and paste the pull command. The container image pull begins. Ensure the pull completes successfully before proceeding to the next step.

  4. Run the container image by following the directions in the Triton Inference Server Quick Start Guide.
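
After step 4, a quick way to confirm that the server came up correctly is to query its health and model-repository endpoints from the host. The sketch below is a minimal example, assuming the tritonclient[http] Python package is installed and the container publishes Triton's default HTTP port 8000 to localhost.

    # Verify the running server from the host (assumes tritonclient[http] is
    # installed and the container exposes the default HTTP port on localhost:8000).
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    print("Server live: ", client.is_server_live())
    print("Server ready:", client.is_server_ready())

    # List the models Triton found in the mounted model repository and their state.
    for model in client.get_model_repository_index():
        print(model["name"], model.get("state", "UNKNOWN"))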

Suggested Reading

For the latest Release Notes, see the Triton Inference Server Release Notes.

For a full list of the supported software and specific versions packaged with this framework in each container image, see the Frameworks Support Matrix.

For more information about the Triton Inference Server, see the Triton Inference Server GitHub repository.

License

The licenses are available and are pulled as part of the procedure described above. By pulling and using the container, you accept the terms and conditions of these licenses.