Linux / amd64
nemoretriever-parse is a general-purpose text-extraction model designed specifically for documents. Given an image, nemoretriever-parse extracts formatted text together with bounding boxes and the corresponding semantic classes. This benefits several downstream tasks, such as increasing the availability of training data for Large Language Models (LLMs), improving the accuracy of retriever systems, and enhancing document-understanding pipelines.
You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.
GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and Product-Specific Terms for NVIDIA AI Products. Use of this model is governed by the NVIDIA Community Model License.
[1] https://huggingface.co/docs/transformers/en/model_doc/mbart
Transformer-based vision-encoder-decoder model
Vision Encoder: ViT-H model (https://huggingface.co/nvidia/C-RADIO)
Adapter Layer: 1D convolutions & norms to compress the dimensionality and sequence length of the latent space (1280 tokens to 320 tokens)
Decoder: mBart [1], 10 blocks
Tokenizer: Galactica (https://arxiv.org/abs/2211.09085); same as the Nougat tokenizer
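The adapter layer's 4x sequence-length compression (1280 tokens to 320 tokens) can be sketched with the standard 1D-convolution output-length formula. The kernel size, stride, and padding below are assumptions chosen to reproduce the stated token counts, not the model's actual hyperparameters:

```python
def conv1d_out_len(length: int, kernel: int, stride: int, padding: int = 0) -> int:
    """Output sequence length of a 1D convolution (floor division)."""
    return (length + 2 * padding - kernel) // stride + 1

# A stride-4 convolution (kernel 4, no padding) maps the encoder's
# 1280 latent tokens to the 320 tokens the decoder consumes.
print(conv1d_out_len(1280, kernel=4, stride=4))  # 320
```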
Input Type(s): Image, Text
Input Format(s): Red, Green, Blue (RGB) + Prompt (String)
Input Parameters: Two-Dimensional (2D) image, One-Dimensional (1D) text
Other Properties Related to Input:
Max Input Resolution (Width, Height): 1648, 2048
Min Input Resolution (Width, Height): 1024, 1280
Channel Count: 3
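Given the resolution bounds above, a client may need to rescale pages before submission. The sketch below is a minimal, assumed preprocessing step that scales an image uniformly so both dimensions land within the supported range; the server's actual preprocessing may differ:

```python
# Supported resolution bounds from the model card.
MIN_W, MIN_H = 1024, 1280
MAX_W, MAX_H = 1648, 2048

def fit_resolution(w: int, h: int) -> tuple:
    """Uniformly scale (w, h) into the supported resolution range."""
    # Downscale if either dimension exceeds the maximum.
    down = min(MAX_W / w, MAX_H / h, 1.0)
    w, h = w * down, h * down
    # Upscale if either dimension is below the minimum.
    up = max(MIN_W / w, MIN_H / h, 1.0)
    return round(w * up), round(h * up)

print(fit_resolution(3296, 4096))  # (1648, 2048)
print(fit_resolution(512, 640))   # (1024, 1280)
```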
Output Type: Text
Output Format: String
Output Parameters: 1D
Other Properties Related to Output: The nemoretriever-parse output is a single string that encodes the text content (formatted or plain) together with bounding boxes and semantic class attributes.
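The exact string encoding is not reproduced here. Purely as an illustration, the sketch below parses a hypothetical markup in which each element carries a class and a bounding box; the tag name, attribute names, and coordinate convention are all assumptions, not the model's actual output format:

```python
import re

# Hypothetical markup, for illustration only:
#   <element class="Title" bbox="x0,y0,x1,y1">text</element>
PATTERN = re.compile(
    r'<element class="(?P<cls>[^"]+)" bbox="(?P<bbox>[^"]+)">(?P<text>.*?)</element>',
    re.DOTALL,
)

def parse_elements(output: str):
    """Yield one dict per detected element in the (assumed) markup."""
    for m in PATTERN.finditer(output):
        x0, y0, x1, y1 = (float(v) for v in m.group("bbox").split(","))
        yield {"class": m.group("cls"), "bbox": (x0, y0, x1, y1), "text": m.group("text")}

sample = '<element class="Title" bbox="0.1,0.05,0.9,0.12">NVIDIA NIM</element>'
print(list(parse_elements(sample)))
```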
Runtime Engine(s): PyTorch
Supported Hardware Platform(s): NVIDIA Hopper/NVIDIA Ampere/NVIDIA Turing
Supported Operating System(s): Linux
nemoretriever-parse: As part of this first release, we share the set of weights named overjoyed-adder.
nemoretriever-parse is first pre-trained on our internal datasets: human-annotated, synthetic, and automated.
Inference
Runtime Engine(s): PyTorch
Test Hardware: NVIDIA H100
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.