Linux / amd64
Linux / arm64
This container houses Llama-3.1-8B-Instruct, an 8-billion-parameter, instruction-tuned large language model created by Meta. The model is part of the Llama 3.1 family of open-access models and is optimized for dialogue and conversational use cases, making it well suited to following user instructions across a wide variety of natural language processing tasks.
The container components are ready for commercial/non-commercial use.
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the Non-NVIDIA [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) model card.
GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products; the use of this model is governed by the NVIDIA Community Model License Agreement.
ADDITIONAL INFORMATION: Llama 3.1 Community License Agreement. Built with Llama.
You are responsible for ensuring that your use of NVIDIA provided models complies with all applicable laws.
Deployment Geography: Global
Release Date:
- Build.Nvidia.com: 07/23/2024 via https://build.nvidia.com/meta/llama-3_1-70b-instruct/modelcard
- Hugging Face: 07/23/2024 via https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct
- NGC: 07/23/2024 via https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/llama-3_1-8b-instruct-nemo
The Llama-3.1-8B-Instruct container includes the following model:
| Model Name & Link | Use Case | How to Pull the Model |
|---|---|---|
| Llama-3.1-8B-Instruct | A conversational AI model for instruction-following, content generation, summarization, and coding assistance. | Automatic |
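Once the container is deployed, the model is served through the NIM OpenAI-compatible chat completions API. Below is a minimal sketch of a client request, assuming a default local deployment on port 8000 and the model name `meta/llama-3.1-8b-instruct`; both the endpoint URL and the model name may differ in your environment, so check the NIM deployment documentation for your release.

```python
import requests

# Assumed local NIM endpoint and served model name; adjust to your deployment.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama-3.1-8b-instruct"

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize the benefits of instruction-tuned LLMs in two sentences."}
    ],
    "max_tokens": 128,
    "temperature": 0.2,
}

# Send the chat completion request and print the generated reply.
response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```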
Visit the NIM Container LLM page for release documentation, deployment guides, and more.
Get access to knowledge base articles and support cases or submit a ticket.
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Llama-3.1-8B-Instruct-1.13.0
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal developer team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.