Llama 4 Scout 17B 16E Instruct

Description: The Llama 4 collection comprises natively multimodal AI models that enable text and multimodal experiences.
Publisher: Meta
Latest Tag: 1
Modified: July 3, 2025
Compressed Size: 10.96 GB
Multinode Support: No
Multi-Arch Support: No
Security Scan Results (Tag 1, Latest): Linux / amd64

Llama 4 Scout 17B 16E Instruct Container Overview

Description:

The Llama 4 Scout (17Bx16E) NIM houses the Llama 4 Scout 17B 16E Instruct model and delivers a vision-language model (VLM) designed for efficient image understanding and reasoning tasks. It leverages the Llama 4 architecture with a focus on fast, high-throughput inference, making it ideal for use cases requiring grounded visual comprehension at scale. The service accepts vision-language inputs and produces detailed language reasoning grounded in the visual context. This NIM enables users to self-host the Llama 4 Scout model in enterprise environments for advanced vision-language use cases, using accelerated inference with vLLM.
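As an illustration of the vision-language request flow described above, the following sketch sends an image plus a text prompt to a self-hosted instance of this NIM through its OpenAI-compatible API. The base URL, port, served model id, and image URL are assumptions (the usual NIM defaults and a placeholder image); confirm the model id against the server's /v1/models response for your deployment.

```python
# Minimal sketch: a vision-language request to a self-hosted NIM.
# Assumes the container is already running and exposing its
# OpenAI-compatible API on localhost:8000 (typical NIM default).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # local NIMs typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-4-scout-17b-16e-instruct",  # assumed served model id
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {
                    "type": "image_url",
                    # Placeholder URL; replace with your own image.
                    "image_url": {"url": "https://example.com/street-scene.jpg"},
                },
            ],
        }
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```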

The container components are ready for commercial/non-commercial use.

License/Terms of Use:

GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products. Your use of this model is governed by the NVIDIA Community Model License. ADDITIONAL INFORMATION: Llama 4 Community License Agreement. Built with Llama.

Deployment Geography:

Global, except EU

Release Date:

build.nvidia.com 04/05/2025 via https://build.nvidia.com/meta/llama-4-scout-17b-16e-instruct/
Hugging Face 04/05/2025 via https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct

Llama 4 Scout 17B 16E Instruct:

The Llama 4 Scout 17B 16E Instruct Container includes the following model:

Model Name & Link: Llama 4 Scout 17B 16E Instruct
Use Case: Intended for commercial and research use in multiple languages; assistant-like chat and visual reasoning tasks; visual recognition, image reasoning, captioning, and answering general questions about an image.
How to Pull the Model: Automatic

Deployment Details:

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

For information on how to deploy this NIM, please see the Get Started guide in the NIM documentation.
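Once the container is running, a quick readiness check can confirm that the service is up and show the model id it serves. The sketch below assumes the standard NIM health and models routes on a local deployment at port 8000; adjust the base URL for your environment.

```python
# Quick post-deployment check (sketch). Assumes the NIM is reachable on
# localhost:8000 and exposes the usual /v1/health/ready and /v1/models
# routes; adapt the base URL to your deployment.
import requests

BASE_URL = "http://localhost:8000"  # assumed local deployment

# Readiness probe: returns 200 once the model is loaded and ready to serve.
ready = requests.get(f"{BASE_URL}/v1/health/ready", timeout=5)
print("ready:", ready.status_code == 200)

# List the served model id(s) via the OpenAI-compatible models route.
models = requests.get(f"{BASE_URL}/v1/models", timeout=5).json()
for m in models.get("data", []):
    print("serving:", m["id"])
```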

Reference(s):

N/A

Container Version(s):

nvcr.io/nim/nvidia/meta-llama-4-scout-17b-16e-instruct:1.3.0

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal developer team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.

Please report security vulnerabilities or NVIDIA AI Concerns here.

You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.