Llama-3.3-70b-Instruct

Description
This container houses Llama-3.3-70B-Instruct, a 70B-parameter, text-only language model for chat, coding, and data generation. It offers performance comparable to larger models with lower hardware requirements and is optimized for dialogue.
Publisher: NVIDIA
Latest Tag: 1.12.0
Modified: August 27, 2025
Compressed Size: 12 GB
Multinode Support: No
Multi-Arch Support: Yes
1.12.0 (Latest) Security Scan Results: Linux / arm64 and Linux / amd64

Llama-3.3-70B-Instruct Overview

Description:

This container houses Llama-3.3-70B-Instruct, an auto-regressive language model that uses an optimized transformer architecture. It is designed for text-based tasks such as multilingual chat, coding assistance, and synthetic data generation, and is particularly optimized for dialogue-based use cases. With 70 billion parameters, it provides performance comparable to larger models while requiring less hardware; it does not process images or audio.

The container components are ready for commercial and non-commercial use.

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the non-NVIDIA model card meta-llama/Llama-3.3-70B-Instruct (https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).

License/Terms of Use:

GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products; and the model is governed by the NVIDIA AI Foundation Models Community License Agreement.

ADDITIONAL INFORMATION: Llama 3.3 Community License Agreement. Built with Llama.

You are responsible for ensuring that your use of NVIDIA provided models complies with all applicable laws.

Deployment Geography:

Global

Release Date:

Build.Nvidia.com: 12/17/2024 via llama-3.3-70b-instruct Model by Meta | NVIDIA NIM

GitHub: 12/13/2024 via https://github.blog/changelog/2024-12-13-llama-3-3-70b-instruct-is-now-available-on-github-models-ga/

Hugging Face: 12/06/2024 via https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct

Llama-3.3-70B-Instruct

The Llama-3.3-70B-Instruct container includes the following model:

Model Name & Link: Llama-3.3-70B-Instruct
Use Case: A powerful conversational AI model for tasks like chat, question answering, and coding assistance.
How to Pull the Model: Automatic
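
"Automatic" indicates that no manual download step is listed here; the container is expected to fetch the model itself when it starts (see the NIM documentation for specifics). As an informal readiness check, the sketch below lists the served models. It rests on assumptions not stated on this page: that the running container exposes an OpenAI-compatible API on http://localhost:8000 and registers the model under an id such as meta/llama-3.3-70b-instruct.

```python
# Informal check that the automatically pulled model is being served.
# Assumptions (not stated on this page): the NIM container is already
# running and exposes an OpenAI-compatible API at http://localhost:8000;
# the printed model id (e.g. "meta/llama-3.3-70b-instruct") is illustrative.
import requests

resp = requests.get("http://localhost:8000/v1/models", timeout=10)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])
```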

Deployment Details:

Visit the NIM Container LLM page for release documentation, deployment guides, and more.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
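
For orientation only (the deployment guides linked above are authoritative), here is a minimal sketch of querying the container once it is running. It assumes the container serves an OpenAI-compatible chat endpoint at http://localhost:8000/v1 and that the model name meta/llama-3.3-70b-instruct applies; both are assumptions to confirm against the NIM documentation.

```python
# Minimal sketch: chat request against a locally running NIM container.
# Assumptions (not from this page): OpenAI-compatible endpoint at
# http://localhost:8000/v1; model registered as "meta/llama-3.3-70b-instruct".
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

completion = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
    max_tokens=64,
)
print(completion.choices[0].message.content)
```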

Container Version(s):

Llama-3.3-70B-Instruct-1.12.0

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal developer team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.

You are responsible for ensuring that your use of NVIDIA provided models complies with all applicable laws.