gemma-3-1b-it

Description
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.
Publisher: NVIDIA
Latest Tag: latest
Modified: August 15, 2025
Compressed Size: 12 GB
Multinode Support: No
Multi-Arch Support: Yes
latest (Latest) Security Scan Results: Linux / amd64, Linux / arm64

Gemma-3-1B-IT Overview

Description:

This container houses the Gemma-3-1B-IT model, which generates text responses for a variety of conversational and instruction-following tasks. As an instruction-tuned model from Google's Gemma 3 family, it has been fine-tuned to be helpful and safe in dialogue applications. It is a compact, decoder-only model with roughly 1 billion parameters, making it highly efficient for its size and capability. This open-weight model is designed for developers and researchers to build applications on a capable yet resource-conscious foundation.

The container components are ready for commercial/non-commercial use.

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the non-NVIDIA model card at google/gemma-3-1b-it · Hugging Face.

License/Terms of Use:

GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products; and the use of this model is governed by the NVIDIA Community License.

ADDITIONAL INFORMATION: Gemma Terms of Use.

You are responsible for ensuring that your use of NVIDIA-provided models complies with all applicable laws.

Deployment Geography:

Global

Release Date:

Build.Nvidia.com: 03/12/2025 via gemma-3-1b-it Model by Google | NVIDIA NIM

GitHub: 03/12/2025 via https://github.com/google-deepmind/gemma

Hugging Face: 03/12/2025 via google/gemma-3-1b-it · Hugging Face

Gemma-3-1B-IT

Gemma-3-1B-IT Container includes the following model:

Model Name & Link: Gemma-3-1B-IT
Use Case: A lightweight, instruction-tuned model for building conversational AI applications like chatbots, Q&A systems, and content summarizers.
How to Pull the Model: Automatic

Deployment Details:

Visit the NIM Container LLM page for release documentation, deployment guides, and more.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
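
As an illustration of the OpenAI-compatible interface that NIM LLM containers expose, the sketch below sends a chat completion request to a locally running instance. It assumes the container is already running and listening on port 8000, that the served model name is "google/gemma-3-1b-it", and that the openai Python package is installed; these values are assumptions to verify against the NIM documentation and the /v1/models endpoint of your deployment.

# Minimal sketch: query a running Gemma-3-1B-IT NIM instance through its
# OpenAI-compatible chat API. The base URL, port, and served model name
# are assumptions -- confirm them against GET /v1/models on your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed default NIM port
    api_key="not-used",                   # local NIM deployments typically do not check this
)

completion = client.chat.completions.create(
    model="google/gemma-3-1b-it",         # assumed served model name
    messages=[
        {"role": "user",
         "content": "Summarize the benefits of lightweight open models in two sentences."}
    ],
    max_tokens=128,
    temperature=0.7,
)
print(completion.choices[0].message.content)

Because the interface is OpenAI-compatible, the same request can also be issued with curl or any other OpenAI-compatible client.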

Container Version(s):

nvcr.io/nvstaging/nim/gemma-3-1b-it:1.12.0-32639017
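
After starting a container from the image above, the sketch below polls the server until it reports ready and then lists the served models. The port and the health/model paths are assumptions based on typical NIM LLM deployments and require the requests package; verify them against the NIM Container LLM documentation for this release.

# Minimal sketch: wait for the (assumed) NIM health endpoint to report ready,
# then list the served model IDs. Port and paths are assumptions to verify
# for your release of the container.
import time
import requests

BASE_URL = "http://localhost:8000"  # assumed default NIM port

for _ in range(60):
    try:
        if requests.get(f"{BASE_URL}/v1/health/ready", timeout=2).status_code == 200:
            break
    except requests.ConnectionError:
        pass
    time.sleep(5)
else:
    raise RuntimeError("NIM server did not become ready in time")

# Listing the models confirms the exact name to pass in chat completion requests.
print(requests.get(f"{BASE_URL}/v1/models", timeout=5).json())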

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal developer team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.