Linux / amd64
NVIDIA ACE Agent is a GPU-accelerated SDK for building LLM-powered conversational AI agents and bots that are customized for your use case and deliver real-time performance. It offers a complete workflow to build and deploy virtual agents that support multi-turn, multi-user contextual conversation flows. It connects AI skills such as NVIDIA Riva Speech AI, NVIDIA ACE Avatar AI and Vision AI, use-case-specific custom plugins, and user interfaces through efficient system integration and composable dialog management.
The ACE Agent NLP server exposes a unified RESTful interface for integrating various NLP models and tasks. The NLP server can deploy models using the NVIDIA Triton Inference Server and supports the NVIDIA TensorRT, PyTorch, ONNX, and Python backends. You can also deploy Hugging Face-supported models using PyTriton, or integrate externally deployed models by writing a custom model client with the @model_api decorator.
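The custom-client pattern described above can be sketched as follows. This is an illustrative stand-in only: the real @model_api decorator ships with the ACE Agent NLP server package and its exact signature may differ; the registry, endpoint path, and model name below are assumptions made to keep the example self-contained.

```python
# Illustrative sketch of the @model_api pattern: register a custom client
# function for an externally deployed model so the NLP server can route
# requests to it. The registry and decorator here are a self-contained
# stand-in, NOT the actual ACE Agent API.
from typing import Callable, Dict

MODEL_REGISTRY: Dict[str, Callable] = {}

def model_api(endpoint: str, model_name: str):
    """Register a custom model client under a named NLP-server endpoint."""
    def decorator(func: Callable) -> Callable:
        MODEL_REGISTRY[f"{endpoint}/{model_name}"] = func
        return func
    return decorator

@model_api(endpoint="/nlp/model/generate", model_name="external-llm")
def external_llm_client(prompt: str) -> str:
    # A real client would call the externally hosted model's HTTP/gRPC
    # endpoint here; this stub returns a canned response instead.
    return f"echo: {prompt}"

# Dispatch a request through the registry, as the server would.
handler = MODEL_REGISTRY["/nlp/model/generate/external-llm"]
print(handler("hello"))  # echo: hello
```

The key idea is that the decorator binds your client function to an endpoint and model name, so externally hosted models appear behind the same unified RESTful interface as models deployed on Triton.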
Follow the NVIDIA ACE Agent documentation to get started.
For optimal performance, deploy the supported NVIDIA AI Enterprise Infrastructure software with this NIM.
Please review the Security Scanning tab to view the latest security scan results.
For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.
Get access to knowledge base articles and support cases or submit a ticket.
Visit the NVIDIA AI Enterprise Documentation Hub for release documentation, deployment guides and more.
Go to the NVIDIA Licensing Portal to manage your software licenses and get the licenses for your products.
This container is licensed under the NVIDIA AI Product Agreement. By pulling and using this container, you accept the terms and conditions of this license.
You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.