Linux / amd64
NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed to speed up generative AI deployment in enterprises. Supporting a wide range of AI models, including NVIDIA AI Foundation and custom models, it ensures seamless, scalable AI inference, on-premises or in the cloud, using industry-standard APIs.
Kanana 1.5, a newly introduced version of the Kanana model family from Kakao Corp, delivers substantial improvements in coding, mathematics, and function-calling capabilities over the previous version, enabling broader application to complex real-world problems. The new version natively handles context lengths of up to 32K tokens, extendable to 128K tokens with YaRN, allowing the model to maintain coherence when handling extensive documents or engaging in extended conversations. Furthermore, Kanana 1.5 delivers more natural and accurate conversations through a refined post-training process.
NVIDIA NIM offers prebuilt containers for large language models (LLMs) that can be used to develop chatbots, content analyzers, or any other application that needs to understand and generate human language. Each NIM consists of a container and a model and uses a CUDA-accelerated runtime for all NVIDIA GPUs, with special optimizations available for many configurations. Whether on-premises or in the cloud, NIM is the fastest way to achieve accelerated generative AI inference at scale.
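As a sketch of what "each NIM consists of a container and a model" means in practice, a NIM container is typically pulled from NGC and launched with GPU access and an NGC API key. The image path and cache location below are illustrative assumptions; consult the NIM deployment guide for the exact image name for this model.

```shell
# Sketch of a typical NIM container launch (image path is an assumption;
# confirm the exact nvcr.io path in the NIM deployment documentation).
export NGC_API_KEY="<your NGC API key>"

docker run -it --rm --gpus all \
  -e NGC_API_KEY=$NGC_API_KEY \
  -v ~/.cache/nim:/opt/nim/.cache \
  -p 8000:8000 \
  nvcr.io/nim/kakaocorp/kanana-1.5-8b-instruct:latest
```

Once the container reports that it is ready, the model is served on port 8000 with an OpenAI-compatible HTTP API.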
NVIDIA NIM for LLMs abstracts away model inference internals such as the execution engine and runtime operations, automatically providing the most performant option available, whether TensorRT-LLM, vLLM, or another backend.
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the link to the kanana-1.5-8b-instruct Model Card.
Deploying and integrating NVIDIA NIM is straightforward thanks to our industry-standard APIs. Visit the NIM Container LLM page for release documentation, deployment guides, and more.
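The industry-standard API in question is OpenAI-compatible, so a deployed NIM can be queried with an ordinary chat-completions request. The sketch below builds such a request payload; the model identifier is an assumption — list the deployed models via `GET /v1/models` on your endpoint to confirm the exact name.

```python
import json

# OpenAI-compatible chat completions payload for a locally deployed NIM
# (served at http://localhost:8000/v1/chat/completions by default).
# The "model" value is a hypothetical identifier; verify it against the
# /v1/models endpoint of your deployment.
payload = {
    "model": "kakaocorp/kanana-1.5-8b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize what NVIDIA NIM provides."}
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

# Serialize to JSON; send this body with any HTTP client, e.g.
#   requests.post("http://localhost:8000/v1/chat/completions",
#                 headers={"Content-Type": "application/json"}, data=body)
body = json.dumps(payload)
print(body)
```

Because the API is OpenAI-compatible, existing OpenAI client libraries can also be pointed at the NIM endpoint by overriding the base URL.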
Please review the Security Scanning (LINK) tab to view the latest security scan results.
For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning (LINK) tab.
Get access to knowledge base articles and support cases or submit a ticket.
The NIM container is governed by the NVIDIA AI Enterprise Software License Agreement, and use of this model is governed by the NVIDIA AI Foundation Models Community License (ai-foundation-models-community-license.pdf, nvidia.com). ADDITIONAL INFORMATION: See Apache 2.0 License.
You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.