
Llama 4 Maverick 17B 128E Instruct

Description
This container houses the Llama 4 Maverick model, a general-purpose multimodal, multilingual mixture-of-experts (MoE) model with 128 experts and 17B active parameters.
Publisher
Meta
Latest Tag
1.4
Modified
September 8, 2025
Compressed Size
13.45 GB
Multinode Support
No
Multi-Arch Support
No
1.4 (Latest) Security Scan Results: Linux / amd64

Llama 4 Maverick 17B 128E Container Overview

Description:

This container houses the Llama 4 Maverick model, a general-purpose multimodal, multilingual mixture-of-experts (MoE) model with 128 experts and 17B active parameters.

The container components are ready for commercial/non-commercial use.

License/Terms of Use:

GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products, except for the model, which is governed by the NVIDIA Community Model License. ADDITIONAL INFORMATION: Llama 4 Community License Agreement. Built with Llama.

Deployment Geography:

Global, except EU

Release Date:

Build.Nvidia.com: April 5, 2025 via https://build.nvidia.com/meta/llama-4-maverick-17b-128e-instruct
Hugging Face: April 5, 2025 via https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct

Llama 4 Maverick 17B 128E:

The Llama 4 Maverick Container includes the following model:

Model Name & Link: https://build.nvidia.com/meta/llama-4-maverick-17b-128e-instruct
Use Case: A general-purpose multimodal, multilingual MoE model with 128 experts and 17B active parameters.
How to Pull the Model: Automatic
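
The "Automatic" pull means the NIM container fetches the model itself once it can authenticate to NGC. As a minimal sketch of that prerequisite, assuming the standard NGC API key flow for the nvcr.io registry (the exact steps are documented in the Get Started guide):

    # '$oauthtoken' is the literal username NGC expects; the password is your NGC API key.
    export NGC_API_KEY="<your NGC API key>"
    echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin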

Deployment Details:

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times than CPU-only solutions.

For information on how to deploy this NIM, see the Get Started guide.
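
As a rough sketch of what a single-node launch typically looks like (NGC_API_KEY, the /opt/nim/.cache path, port 8000, and the shared-memory size below are assumed NIM conventions, not values confirmed by this page; consult the Get Started guide for supported GPU configurations and exact flags):

    # Launch the NIM container on all local GPUs and expose its HTTP API on port 8000.
    docker run -it --rm --gpus all \
      --shm-size=16GB \
      -e NGC_API_KEY \
      -v "$HOME/.cache/nim:/opt/nim/.cache" \
      -p 8000:8000 \
      nvcr.io/nim/meta/llama-4-maverick-17b-128e-instruct:latest

    # Once the server reports it is ready, a simple smoke test against the assumed
    # OpenAI-compatible chat endpoint (model name assumed to match the catalog entry):
    curl -s http://localhost:8000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
            "model": "meta/llama-4-maverick-17b-128e-instruct",
            "messages": [{"role": "user", "content": "Say hello in three languages."}],
            "max_tokens": 64
          }'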

Enterprise Support

Get access to knowledge base articles and support cases, or submit a ticket.

Container Version(s):

nvcr.io/nim/meta/llama-4-maverick-17b-128e-instruct:latest
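
For reproducible deployments, it is usually preferable to pin the tag listed above (1.4 at the time of writing) rather than rely on :latest. A minimal example, assuming you have already logged in to nvcr.io:

    docker pull nvcr.io/nim/meta/llama-4-maverick-17b-128e-instruct:1.4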

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal developer team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.