Triton Management Service (TMS) is a Kubernetes microservice intended to manage the deployment of AI models on Triton Inference Servers (TIS). The benefit of using TMS over manual or custom deployment solutions comes from TMS's in-depth understanding of TIS and GPU hardware, and of how they interact with various model frameworks such as PyTorch, TensorFlow, ONNX, and others. TMS strives to balance deploying the minimum number of TIS instances against the performance of TIS-served AI models.
This container provides the primary server process for managing Triton deployments.
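Because TMS runs as a standard Kubernetes workload, the server process can be inspected with ordinary kubectl commands once installed. As a minimal sketch, assuming a release installed into a namespace named tms with a deployment named tms (both names are illustrative, not fixed by this image):

    # List the TMS pods; the server pod should report a Running status.
    kubectl get pods --namespace tms

    # Follow the primary server process logs (the deployment name is an assumption).
    kubectl logs --namespace tms deployment/tms --follow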
Getting started with the Triton Management Service (TMS) Container Image
Triton Management Service (TMS) Container Image is exclusively available with NVIDIA AI Enterprise.
Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA AI Enterprise Documentation.
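As an illustrative sketch only, installation typically follows the usual NGC pattern of authenticating with an API key and installing a Helm chart. The chart URL, version, release name, and namespace below are assumptions; take the exact values from your deployment guide.

    # Authenticate to the NGC container registry with your NVIDIA AI Enterprise API key.
    docker login nvcr.io --username '$oauthtoken' --password '<NGC_API_KEY>'

    # Fetch and install the TMS Helm chart (chart location and version are illustrative).
    helm fetch https://helm.ngc.nvidia.com/nvaie/charts/triton-management-service-x.y.z.tgz \
        --username '$oauthtoken' --password '<NGC_API_KEY>'
    helm install tms triton-management-service-x.y.z.tgz --namespace tms --create-namespace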
Detailed documentation on Triton Management Service (TMS) is available.
This image may include components licensed under open-source licenses such as the GPL. Any such source code is included in the image under the /legal/source directory.
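To inspect that source without deploying anything, you can list the directory directly from the image; the image path and tag below are placeholders for whatever you pulled from NGC:

    # List the bundled open-source components (image name and tag are illustrative).
    docker run --rm --entrypoint ls nvcr.io/nvaie/triton-management-service:<tag> /legal/source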
For optimal performance, deploy the supported NVIDIA AI Enterprise Infrastructure software with Triton Management Service (TMS).
The latest version of Triton Management Service (TMS) is compatible with:
Get access to knowledge base articles and support cases or submit a ticket.
Visit the NVIDIA AI Enterprise Documentation Hub for release documentation, deployment guides and more.
Go to the NVIDIA Licensing Portal to manage the software licenses for your products.