llama-3.2-nemoretriever-300m-embed-v2

World-class multilingual and cross-lingual question-answering retrieval.
Publisher: NVIDIA
Latest Tag: latest
Modified: September 5, 2025
Compressed Size: 3.3 GB
Multinode Support: No
Multi-Arch Support: Yes

Model Overview

Description

The Llama 3.2 NeMo Retriever Embedding 300M model version 2 is optimized for multilingual and cross-lingual text question-answering retrieval with support for long documents (up to 8192 tokens). This model was evaluated on 26 languages: English, Arabic, Bengali, Chinese, Czech, Danish, Dutch, Finnish, French, German, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Norwegian, Persian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, and Turkish.

In addition to enabling multilingual and cross-lingual question-answering retrieval, this model reduces the data storage footprint through dynamic embedding sizing and supports longer token lengths, making it feasible to handle large-scale datasets efficiently.

An embedding model is a crucial component of a text retrieval system, as it transforms textual information into dense vector representations. Embedding models are typically transformer encoders that process the tokens of an input text (for example, a question or a passage) to output an embedding.
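
As a rough illustration of this flow, the sketch below embeds a query and two passages with a generic Hugging Face encoder and scores them by cosine similarity. The checkpoint name and the mean-pooling choice are assumptions for illustration, not this model's actual implementation.

```python
# Illustrative sketch only: generic bi-encoder retrieval, not this NIM's internals.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("some-encoder")  # hypothetical checkpoint
encoder = AutoModel.from_pretrained("some-encoder")

def embed(texts: list[str]) -> torch.Tensor:
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=8192, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)        # mean pooling (assumed)
    return F.normalize(pooled, dim=-1)                   # unit-length embeddings

query_vec = embed(["How long can input documents be?"])
passage_vecs = embed(["The model supports up to 8192 tokens.", "Unrelated text."])
scores = query_vec @ passage_vecs.T  # cosine similarity; highest score = best match
```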

The Llama 3.2 NeMo Retriever Embedding 300M model version 2 is part of the NVIDIA NeMo Retriever collection of NIMs, which provide state-of-the-art, commercially ready models and microservices optimized for the lowest latency and highest throughput. It features a production-ready information retrieval pipeline with enterprise support. The models that form the core of this solution have been trained using responsibly selected, auditable data sources. With multiple pre-trained models available as starting points, developers can also readily customize them for domain-specific use cases, such as information technology or human resources help assistants and research & development assistants.

This model is ready for commercial use.

License/Terms of use

GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and Product-Specific Terms for AI Products. Use of this model is governed by the NVIDIA Community Model License.

ADDITIONAL INFORMATION: Llama 3.2 Community License Agreement. Built with Llama.

You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.

Deployment Geography:

Global

Use Case:

The Llama 3.2 NeMo Retriever Embedding 300M model version 2 is most suitable for users who want to build a multilingual question-and-answer application over a large text corpus, leveraging the latest dense retrieval technologies.

Release Date:

NGC: Available August 28, 2025

Model Architecture

Architecture Type: Transformer
Network Architecture: Fine-tuned Llama 3.2 300M Retriever

This NeMo Retriever embedding model is a transformer encoder with 9 layers and an embedding size of 2048, pruned and distilled from the llama-3.2-nv-embedqa-1b-v1 model. After pruning and distillation, the model was trained on public and synthetic datasets using the AdamW optimizer with a learning rate of 5e-6, 100 warm-up steps, and a WarmupDecayLR scheduler. Embedding models for text retrieval are typically trained using a bi-encoder architecture: a pair of texts (for example, a query and a chunked passage) is encoded independently by the embedding model. Contrastive learning is then used to maximize the similarity between the query and the passage that contains the answer, while minimizing the similarity between the query and sampled negative passages that are not useful for answering the question.
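
A minimal sketch of such a contrastive objective (InfoNCE with in-batch negatives) is shown below. The temperature value and the use of in-batch negatives are common conventions assumed here, not the model's documented training recipe.

```python
# Sketch of a bi-encoder contrastive loss (InfoNCE with in-batch negatives).
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb: torch.Tensor,
                     passage_emb: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    """query_emb, passage_emb: (B, D) L2-normalized; row i of passage_emb is
    the positive passage for query i, all other rows act as negatives."""
    logits = query_emb @ passage_emb.T / temperature            # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)  # positives on diagonal
    # Maximize similarity for matched pairs, minimize it for the negatives.
    return F.cross_entropy(logits, targets)
```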

Computational Load:

Cumulative Compute: 6.67E+22
Estimated Energy and Emissions for Model Training: 259,500 kWh | 107 tons CO2eq

This model's cumulative compute is dominated by the llama3.2-1b model training; estimates of the base model's compute and energy/emissions usage are sourced from epoch.ai and the llama3.2-1b model card.

Input

Input Type (query and document): Text
Input Format: List of strings
Input Parameters: 1D
Other Properties: The model's maximum context length is 8192 tokens. Texts longer than the maximum length must be either chunked or truncated; this applies to both queries and documents.
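
The sketch below shows one common way to chunk a long document into overlapping windows that fit the context length; the tokenizer checkpoint and the overlap size are illustrative assumptions.

```python
# Illustrative sketch only: chunking long text to fit the 8192-token limit.
from transformers import AutoTokenizer

MAX_TOKENS = 8192

tokenizer = AutoTokenizer.from_pretrained("some-tokenizer")  # hypothetical checkpoint

def chunk_text(text: str, overlap: int = 128) -> list[str]:
    """Split text into overlapping windows that each fit the context length."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    step = MAX_TOKENS - overlap
    windows = [ids[i:i + MAX_TOKENS] for i in range(0, len(ids), step)]
    return [tokenizer.decode(w) for w in windows]
```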

Output

Output Type: Floats
Output Format: List of floats
Output Parameters: 1D
Other Properties Related to Output: The model outputs an embedding vector of up to 2048 dimensions for each text string; the dimension can be configured to 384, 512, 768, 1024, or 2048.
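
If a smaller dimension is consumed client-side, a common convention is to truncate the full vector and renormalize, as in the hedged sketch below. Whether this matches the service's own dimension-reduction behavior should be verified against the NIM documentation.

```python
# Sketch: client-side dimension reduction by truncate-then-renormalize.
# This is a common convention for configurable-dimension embeddings, assumed
# here rather than documented behavior of this NIM.
import numpy as np

def reduce_dim(embedding_2048: np.ndarray, dim: int = 512) -> np.ndarray:
    assert dim in (384, 512, 768, 1024, 2048)
    v = embedding_2048[:dim]
    return v / np.linalg.norm(v)  # renormalize so cosine similarity stays valid
```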

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (such as GPU cores) and software frameworks (such as CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration

Runtime Engine: NeMo Retriever embedding NIM
Supported Hardware Microarchitecture Compatibility: NVIDIA Ampere, NVIDIA Blackwell, NVIDIA Hopper, NVIDIA Lovelace
Supported Operating System(s): Linux
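
As a usage illustration, the sketch below queries a locally deployed embedding NIM over its HTTP API. The port, route, and payload fields follow the common OpenAI-compatible NIM convention and should be verified against the NIM documentation; the input_type field and the response shape in particular are assumptions here.

```python
# Hedged sketch: calling a locally running NeMo Retriever embedding NIM.
import requests

resp = requests.post(
    "http://localhost:8000/v1/embeddings",  # assumed default port and route
    json={
        "model": "nvidia/llama-3.2-nemoretriever-300m-embed-v2",
        "input": ["How do I deploy this NIM?"],
        "input_type": "query",  # embedding NIMs distinguish query vs. passage (assumed field)
    },
)
resp.raise_for_status()
vector = resp.json()["data"][0]["embedding"]  # assumed OpenAI-compatible response shape
print(len(vector))  # up to 2048 floats, depending on the configured dimension
```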

Model Version(s)

Llama 3.2 NeMo Retriever Embedding 300M v2
Short Name: llama-3.2-nemoretriever-300m-embed-v2

Training Dataset & Evaluation

Training Dataset

The development of large-scale public open-QA datasets has enabled tremendous progress in powerful embedding models. However, one popular dataset, MS MARCO, restricts commercial licensing, limiting the use of these models in commercial settings. To address this, NVIDIA created its own training dataset blend based on public QA datasets that each carry a license permitting commercial applications, as well as synthetic QA datasets created using Llama 3.1 70B Instruct. For long-context retrieval, synthetic datasets were created using the same methodology as the MLDR train datasets (https://huggingface.co/datasets/Shitao/MLDR).

Data Collection Method by dataset: Hybrid: Automated, Human, Synthetic
Labeling Method by dataset: Hybrid: Automated, Human, Synthetic
Properties: Semi-supervised pre-training on 12M samples from public datasets and fine-tuning on 1M samples from public and synthetic datasets.

Evaluation Datasets

We evaluated the NeMo Retriever embedding model against open and commercial retriever models from the literature on academic question-answering benchmarks: NQ, HotpotQA, and FiQA (finance Q&A) from the BeIR benchmark, as well as the TechQA dataset. Note that the model was evaluated offline on A100 GPUs using the model's PyTorch checkpoint. The metric used in this benchmark was Recall@5.

We also evaluated the model's multilingual capabilities on the academic benchmark MIRACL across 15 languages, and translated the English and Spanish versions of MIRACL into 11 additional languages. The reported scores are based on an internal version of MIRACL that selects hard negatives for each query to reduce the corpus size.

We evaluated the model on the academic benchmark MLQA across 7 languages (Arabic, Chinese, English, German, Hindi, Spanish, and Vietnamese). We consider only the evaluation sets in which the query and the documents are in the same language.

We evaluated support for long documents on the academic benchmark Multilingual Long-Document Retrieval (MLDR), which is built on Wikipedia and mC4 and covers 12 typologically diverse languages. The English version has a median length of 2,399 tokens and a 90th-percentile length of 7,483 tokens using the Llama 3.2 tokenizer. The MLDR dataset is based on questions synthetically generated with an LLM, which tend to share keywords with the positive document and may not be representative of real user queries. This characteristic of the dataset benefits sparse retrieval methods like BM25.

Data Collection Method by dataset: Hybrid: Automated, Human, Synthetic
Labeling Method by dataset: Hybrid: Automated, Human, Synthetic
Properties: The evaluation datasets are based on MTEB/BEIR, TextQA, TechQA, MIRACL, MLQA, and MLDR. Dataset sizes range from tens of thousands up to 5M entries, depending on the dataset.
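
For reference, the Recall@5 metric used throughout the results below can be computed as in this minimal sketch: the fraction of queries for which at least one relevant document appears among the top five retrieved results.

```python
# Minimal sketch of Recall@k (k=5 in the benchmarks below).
def recall_at_k(retrieved: list[list[str]],
                relevant: list[set[str]],
                k: int = 5) -> float:
    """retrieved[i]: ranked doc IDs for query i; relevant[i]: its relevant doc IDs."""
    hits = sum(1 for docs, rel in zip(retrieved, relevant)
               if any(d in rel for d in docs[:k]))
    return hits / len(retrieved)
```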

Evaluation Results

Average Recall@5 on NQ, HotpotQA, FiQA, and TechQA (open & commercial retrieval models):
llama-3.2-nemoretriever-300m-embed-v2 (embedding dim 2048): 62.92%
intfloat/multilingual-e5-large: 61.23%
Snowflake/snowflake-arctic-embed-l-v2.0: 60.9%
Alibaba-NLP/gte-multilingual-base: 57.09%
BAAI/bge-m3: 57.84%
nv-embedqa-e5-v5: 62.07%
e5-large-unsupervised: 48.03%
BM25: 44.67%

Average Recall@5 on MIRACL, multilingual (open & commercial retrieval models):
llama-3.2-nemoretriever-300m-embed-v2 (embedding dim 2048): 66.12%
intfloat/multilingual-e5-large: 64.27%
Snowflake/snowflake-arctic-embed-l-v2.0: 60.28%
Alibaba-NLP/gte-multilingual-base: 63.27%
BAAI/bge-m3: 67.67%
BM25: 26.51%

Average Recall@5 on MLQA across languages (open & commercial retrieval models):
llama-3.2-nemoretriever-300m-embed-v2 (embedding dim 2048): 75.91%
intfloat/multilingual-e5-large: 77.21%
Snowflake/snowflake-arctic-embed-l-v2.0: 53.34%
Alibaba-NLP/gte-multilingual-base: 71.08%
BAAI/bge-m3: 74.21%
BM25: 13.01%

Average Recall@5 on MLDR (open & commercial retrieval models):
llama-3.2-nemoretriever-300m-embed-v2 (embedding dim 2048): 53.27%
intfloat/multilingual-e5-large: 38.46%
Snowflake/snowflake-arctic-embed-l-v2.0: 36.42%
Alibaba-NLP/gte-multilingual-base: 62.13%
BAAI/bge-m3: 57.85%
BM25: 71.39%

Inference
Engine: TensorRT
Test Hardware: H100 PCIe/SXM, A100 PCIe/SXM, L40S, L4, and A10G

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case, and address unforeseen product misuse.

For more detailed information on ethical considerations for this model, see the Model Card++ tab for the Explainability, Bias, Safety & Security, and Privacy subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Get Help

Enterprise Support

Get access to knowledge base articles and support cases, or submit a ticket, at the NVIDIA AI Enterprise Support Services page.

NVIDIA NIM Documentation

Visit the NeMo Retriever docs page for release documentation, deployment guides and more.