Augmenting an existing AI foundation model gives enterprises an advanced, low-cost starting point for generating accurate, relevant responses for their specific use cases. The Retrieval-Augmented Generation (RAG)-based AI chatbot workflow accelerates building and deploying enterprise LLM solutions and is currently available in private early access for NVIDIA AI Enterprise customers.
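To make the idea concrete, a minimal sketch of the RAG pattern is shown below. This is not NVIDIA's implementation; it uses a toy bag-of-words retriever purely to illustrate the flow: embed a corpus of enterprise documents, retrieve the passages most similar to the user's question, and prepend them to the prompt sent to the LLM. All function names and the sample documents here are hypothetical.

```python
import math
import re
from collections import Counter


def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector.
    A real pipeline would use a learned embedding model instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def build_prompt(query, documents):
    """Augment the user's question with retrieved enterprise context
    before it is sent to the foundation model."""
    context = "\n".join(retrieve(query, documents))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")


# Hypothetical enterprise knowledge base.
docs = [
    "Triton Inference Server serves models over HTTP and gRPC.",
    "NeMo provides tools for building and customizing LLMs.",
    "Employees accrue 20 vacation days per year.",
]

prompt = build_prompt("how do i serve models over http", docs)
```

Because the model answers from the retrieved context rather than from its training data alone, responses stay grounded in the enterprise's own documents and can be updated simply by refreshing the document store.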
This RAG-based reference chatbot workflow contains:
Key benefits include:
To get started, review the documentation linked below to learn what is included in this RAG-based AI chatbot workflow and how to run it.
Learn more about generative AI through the courses offered by our Deep Learning Institute.
Contact NVIDIA to learn more about options for accessing the AI Chatbot with Retrieval Augmented Generation workflow, Triton Inference Server, and NeMo.
By accessing NeMo as part of the AI chatbot with RAG workflow, you accept the terms and conditions of this End User License Agreement.