Platform: Linux / amd64
NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed to accelerate deployment of generative AI across clouds, data centers, and workstations.
Benefits of self-hosted NIMs:
Deploy anywhere and maintain control of generative AI applications and data
Streamline AI application development with industry-standard APIs and tools tailored for enterprise environments
Prebuilt containers for the latest generative AI models, offering a diverse range of options and flexibility right out of the gate
Industry-leading latency and throughput for cost-effective scaling
Support for custom models out of the box, so models can be trained on domain-specific data
Enterprise-grade software with dedicated feature branches, rigorous validation processes, and robust support structures
Please visit the Riva NMT NIM to get started.
Please review the Security Scanning tab to view the latest security scan results.
For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.
Get access to knowledge base articles and support cases, or submit a ticket.
Visit the NVIDIA AI Enterprise Documentation Hub for release documentation, deployment guides, and more.
Go to the NVIDIA Licensing Portal to manage your software licenses.
This container is licensed under the NVIDIA AI Product Agreement. By pulling and using this container, you accept the terms and conditions of this license.
You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.