Supported platforms: Linux / arm64, Linux / amd64
This container houses Llama-3.3-70B-Instruct, an auto-regressive language model built on an optimized transformer architecture. It is designed for text-based tasks such as multilingual chat, coding assistance, and synthetic data generation, and is particularly optimized for dialogue-based use cases. With 70 billion parameters, it delivers performance comparable to larger models while requiring less hardware; it is text-only and does not process images or audio.
The container components are ready for commercial/non-commercial use.
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the Non-NVIDIA model card: [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products; and the model is governed by the NVIDIA AI Foundation Models Community License Agreement.
ADDITIONAL INFORMATION: Llama 3.3 Community License Agreement. Built with Llama.
You are responsible for ensuring that your use of NVIDIA provided models complies with all applicable laws.
Deployment Geography: Global
Model Release Dates:
- Build.Nvidia.com: 12/17/2024 via llama-3.3-70b-instruct Model by Meta | NVIDIA NIM
- GitHub: 12/13/2024 via https://github.blog/changelog/2024-12-13-llama-3-3-70b-instruct-is-now-available-on-github-models-ga/
- Hugging Face: 12/06/2024 via https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
The Llama-3.3-70B-Instruct container includes the following model:
| Model Name & Link | Use Case | How to Pull the Model |
|---|---|---|
| Llama-3.3-70B-Instruct | A powerful conversational AI model for tasks like chat, question answering, and coding assistance. | Automatic |
Visit the NIM Container LLM page for release documentation, deployment guides, and more.
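For context, a deployed NIM container exposes an OpenAI-compatible HTTP API. The snippet below is a minimal sketch of sending a chat-completion request to that API; it assumes the container is already running on the local host with its API published on port 8000 (the usual NIM default) and that the model is served under the identifier `meta/llama-3.3-70b-instruct`. Check the deployment guide linked above for the exact values in your environment.

```python
# Minimal sketch: query a locally deployed Llama-3.3-70B-Instruct NIM endpoint.
# Assumptions: container running on this host, OpenAI-compatible API on port 8000,
# model registered as "meta/llama-3.3-70b-instruct".
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

payload = {
    "model": "meta/llama-3.3-70b-instruct",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Write a short haiku about GPUs."}
    ],
    "max_tokens": 128,
    "temperature": 0.7,
}

response = requests.post(NIM_URL, json=payload, timeout=120)
response.raise_for_status()

# The response body follows the standard OpenAI chat-completions schema.
print(response.json()["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client can be pointed at the same endpoint in the same way, since the request path and response schema follow the standard chat-completions format.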
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times than CPU-only solutions.
Llama-3.3-70B-Instruct-1.12.0
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal developer team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.