Nemovision-4B-v2-Instruct

Description: Generates responses for roleplaying, retrieval augmented generation, and function calling with vision understanding and reasoning capabilities
Publisher: NVIDIA
Latest Version: Nemovision-4B-v2-Instruct
Modified: January 27, 2025
Size: 9.05 GB

Model Overview

Description:

The Nemovision-4B-v2-Instruct model combines the Mistral-NeMo-Minitron-4B-Instruct language model with the RADIO vision encoder to run performantly on a broad range of RTX GPUs while delivering the accuracy developers need. The vision language model is based on the VILA VLM architecture and trained with the VILA and NeMo frameworks and datasets. It generates responses for roleplaying, retrieval augmented generation, and function calling, with vision understanding and reasoning capabilities. This model is ready for commercial use.

License/Terms of Use

The use of this model is governed by the NVIDIA Community Model License.

Model Architecture:

Architecture Type: Transformer
Network Architecture:

  • Vision Encoder: radio:768:nvidia/C-RADIO
  • Language Encoder: MN-Minitron-4B-128k-Instruct

Input

Input Type(s): Video, Image(s), Text
Input Format(s): Video (.mp4), Image (Red, Green, Blue (RGB)), and Text (String)
Input Parameters: Video (3D), Image (2D), Text (1D)
Other Properties Related to Input: The model has a maximum of 8192 input tokens.
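
As a minimal sketch of preparing these inputs in Python (using Pillow; the file path and prompt text below are placeholders):

from PIL import Image

# Load a still image and normalize it to the RGB format the model expects.
# "photo.jpg" is a placeholder path.
image = Image.open("photo.jpg").convert("RGB")

# Text inputs are plain strings; the full prompt must fit within the
# model's 8192-token input limit.
prompt = "Describe what is happening in this image."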

Output

Output Type(s): Text
Output Format(s): String
Output Parameters: 1D
Other Properties Related to Output: The model has a maximum of 8192 input tokens; the maximum output length for both versions can be configured independently of the input.

Prompt Format:

Single Turn

With image:

<s>System
{system prompt}</s>
<s>User
<image>
{prompt}</s>
<s>Assistant\n

Text only:

<s>System
{system prompt}</s>
<s>User
{prompt}</s>
<s>Assistant\n

Multi-image

<s>System
{system prompt}</s>
<s>User
<image>
<image>
<image>
{prompt}</s>
<s>Assistant\n
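
Assuming the templates above are assembled as literal strings, a small helper like the hypothetical build_prompt below produces the single-turn prompt for zero or more images:

def build_prompt(system_prompt: str, user_prompt: str, num_images: int = 0) -> str:
    """Assemble a single-turn prompt per the templates above.

    Each attached image is represented by an <image> tag preceding the
    user text; num_images=0 yields the text-only variant.
    """
    image_tags = "<image>\n" * num_images
    return (
        f"<s>System\n{system_prompt}</s>\n"
        f"<s>User\n{image_tags}{user_prompt}</s>\n"
        "<s>Assistant\n"
    )

# Example: a two-image prompt.
print(build_prompt("You are a helpful visual assistant.",
                   "What differs between these two photos?", num_images=2))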

Multi-Turn or Few-shot (shown with function calling)

<s>System
{system prompt}</s>
<AVAILABLE_TOOLS>[...]</AVAILABLE_TOOLS></s>
<s>User
{prompt}</s>
<s>Assistant
<TOOLCALL>[ ... ]</TOOLCALL></s>
<s>User
{prompt}</s>
<s>Assistant\n
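
The tool schemas and tool-call payloads appear only as placeholders ([...]) in the template above, so the JSON used in this sketch is hypothetical; the helper simply assembles the turns into that template:

import json

def build_tool_prompt(system_prompt, tools, turns):
    """Assemble a multi-turn, function-calling prompt per the template above.

    tools: JSON-serializable tool schemas (the exact schema is not
    specified by this card, so the example below is hypothetical).
    turns: ("user", text) or ("toolcall", json_text) tuples; a final open
    "<s>Assistant" line is appended for the model to complete.
    """
    parts = [
        f"<s>System\n{system_prompt}</s>",
        f"<AVAILABLE_TOOLS>{json.dumps(tools)}</AVAILABLE_TOOLS></s>",
    ]
    for role, content in turns:
        if role == "user":
            parts.append(f"<s>User\n{content}</s>")
        else:  # an assistant turn that invokes a tool
            parts.append(f"<s>Assistant\n<TOOLCALL>{content}</TOOLCALL></s>")
    parts.append("<s>Assistant\n")
    return "\n".join(parts)

# Example with one hypothetical tool and one prior tool call.
tools = [{"name": "get_weather", "parameters": {"city": "string"}}]
print(build_tool_prompt(
    "You can call tools to answer questions.",
    tools,
    [("user", "What's the weather in Austin?"),
     ("toolcall", '[{"name": "get_weather", "arguments": {"city": "Austin"}}]'),
     ("user", "And tomorrow?")],
))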

Software Integration:

Runtime(s): AI Inference Manager (NVAIM) Version 1.0.0

Supported Hardware Microarchitecture Compatibility: GPU supporting DirectX 11/12 and Vulkan 1.2 or higher

[Preferred/Supported] Operating System(s):

  • Windows

Software Integration (Cloud):

[Preferred/Supported] Operating System(s):

  • Linux

Training & Evaluation:

Training Dataset:

NV-Pretraining and NV-VILA-SFT data were used. Additionally, the following datasets were used:

  • OASST1
  • OASST2
  • Localized Narratives
  • TextCaps
  • TextVQA
  • RefCOCO
  • VQAv2
  • GQA
  • SynthDoG-en
  • A-OKVQA
  • WIT
  • CLEVR
  • CLEVR-X
  • CLEVR-Math
  • ScreenQA
  • WikiSQL
  • WikiTableQuestions
  • RenderedText
  • FinQA
  • TAT-QA
  • Dolly
  • Websight
  • RAVEN
  • VizWiz
  • Inter-GPS
  • YouCook2
  • ActivityNet Captions
  • Video Localized Narratives
  • CLEVRER
  • Perception Test
  • Next-QA

Data Collection Method by dataset:

  • Hybrid: Automated, Human

Labeling Method by dataset:

  • Hybrid: Automated, Human

Properties:

NV-Pretraining data was collected from a 5M-image subsample of the NV-CLIP dataset. Stage 3 NV-SFT data comprises 2.8M images and 3.58M annotations on images that carry a commercial license only. Additionally, 355K commercially licensed videos with 400K video annotations were used.

Evaluation Dataset:

Data Collection Method by dataset:

  • Hybrid: Human, Automatic/Sensors

Labeling Method by dataset:

  • Hybrid: Human, Automatic/Sensors

Properties:

A collection of benchmarks was used, including academic VQA benchmarks and recent benchmarks proposed specifically for evaluating LMMs on language understanding and reasoning, instruction following, and function calling:

  • GQA
  • ScienceQA Image
  • Text VQA
  • POPE
  • MME
  • SEED-Bench
  • MMMU
  • Video MME
  • Egoschema
  • Perception Test
  • IFEval

Image Benchmarks

Benchmark            Accuracy
GQA                  60.78
SQA Image            76.1
Text VQA             75.48
POPE (Popular)       88.33
MME_sum              1842.7
SEED                 69.98
SEED Image           74
MMMU val (beam 5)    41.22

Video Benchmarks

Benchmark                 Accuracy
VideoMME w/o Sub @32f     53.11
VideoMME w/ Sub @32f      57.7
Egoschema (val)           58.6
Perception Test           65.63

Text Benchmarks

Benchmark        Accuracy
IFEval           54.34
MMLU (5-shot)    64.98
GSM8K            63.76
MBPP             59.14

Inference:

Framework:

  • PyTorch

Test Hardware:

  • H100
  • A100
  • A10G
  • L40S

Supported Hardware Platform(s): L40S, A10G, A100, H100
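
The card names PyTorch as the inference framework but does not document a client API (the integrated runtime above is AI Inference Manager). Purely as a hedged sketch, the snippet below assumes a Hugging Face-style checkpoint with a bundled multimodal processor; the model ID, processor call, and generation settings are all assumptions rather than a documented integration path:

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

# Placeholder checkpoint ID; the card does not publish a loading path.
MODEL_ID = "nvidia/Nemovision-4B-v2-Instruct"

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda")

# Single-turn, single-image prompt per the documented template.
image = Image.open("photo.jpg").convert("RGB")
prompt = ("<s>System\nYou are a helpful visual assistant.</s>\n"
          "<s>User\n<image>\nDescribe this image.</s>\n"
          "<s>Assistant\n")

inputs = processor(text=prompt, images=[image], return_tensors="pt").to("cuda")
# max_new_tokens caps the response length independently of the
# 8192-token input limit.
output_ids = model.generate(**inputs, max_new_tokens=512)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])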

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using this model in accordance with our terms of service, developers should work with their internal model team to ensure it meets the requirements of the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.