NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed to speed up generative AI deployment in enterprises. Supporting a wide range of AI models, including NVIDIA AI foundation models and custom models, it ensures seamless, scalable AI inferencing, on premises or in the cloud, leveraging industry-standard APIs.
NVIDIA NIM for Vision Language Models (NVIDIA NIM for VLMs) brings the power of state-of-the-art vision language models to enterprise applications, providing unmatched natural language and multimodal understanding capabilities.
NIM makes it easy for IT and DevOps teams to self-host vision language models in their own managed environments while still providing developers with industry-standard APIs that allow them to build powerful copilots, chatbots, and AI assistants that can transform their business. Leveraging NVIDIA’s cutting-edge GPU acceleration and scalable deployment, NIM offers the fastest path to inference with unparalleled performance.
NVIDIA NIM for VLMs abstracts away model inference internals such as the execution engine and runtime operations, providing the most performant option available, whether that is TensorRT-LLM, vLLM, or another backend, along with a range of other high-performance features.
The potential applications of NIM are vast, spanning industries and use cases.
Deploying and integrating NVIDIA NIM is straightforward thanks to our industry-standard APIs. Visit the NIM Container VLM page for release documentation, deployment guides, and more.
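As a minimal, hedged sketch (not an official example), the snippet below shows how an application might query a self-hosted NIM VLM through its OpenAI-compatible chat completions API. The endpoint port, model name, and image URL are illustrative assumptions; substitute the values for your own deployment.

```python
# Minimal sketch: calling a locally hosted NIM VLM via its
# OpenAI-compatible chat completions API.
# Assumptions (adjust for your deployment): the microservice is already
# running and listening at http://localhost:8000/v1; the model name and
# image URL below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # a self-hosted endpoint may not require a key
)

response = client.chat.completions.create(
    model="meta/llama-3.2-11b-vision-instruct",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.jpg"},  # placeholder image
                },
            ],
        }
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```

Because the interface mirrors the OpenAI chat completions API, existing client code and SDKs can typically be pointed at a self-hosted NIM endpoint by changing only the base URL and model name.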
Please review the Security Scanning tab to view the latest security scan results.
For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.
Get access to knowledge base articles and support cases or submit a ticket.
The NIM container is governed by the NVIDIA Software License Agreement and the Product-Specific Terms for AI Products; the use of this model is governed by the NVIDIA AI Foundation Models Community License Agreement. ADDITIONAL INFORMATION: Llama 3.2 Community License Agreement; Built with Llama.
You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.