NVIDIA MTBench

Description: NVIDIA Evals Factory-compatible container with MTBench support
Publisher: NVIDIA
Latest Tag: 25.06.1
Modified: July 9, 2025
Compressed Size: 3.07 GB
Multinode Support: No
Multi-Arch Support: No
25.06.1 (Latest) Security Scan Results: No results available.

NVIDIA Evals Factory

The goal of NVIDIA Evals Factory is to advance and refine state-of-the-art methodologies for model evaluation, and deliver them as modular evaluation packages (evaluation containers and pip wheels) that teams can use as standardized building blocks.

Quick start guide

NVIDIA Evals Factory provides evaluation clients that are specifically built to evaluate model endpoints using our Standard API.

Launching an evaluation for an LLM

  1. Install the package

    pip install nvidia-mtbench-evaluator
    
  2. (Optional) Set a token for your API endpoint if it's protected

    export MY_API_KEY="your_api_key_here"
    
  3. List the available evaluations:

    $ core_evals_mtbench ls
    Available tasks:
    * mtbench (in mtbench)
    * mtbench-cor1 (in mtbench)
    
  4. Run the evaluation of your choice:

    core_evals_mtbench run_eval \
        --eval_type mtbench-cor1 \
        --model_id meta/llama-3.1-70b-instruct \
        --model_url https://integrate.api.nvidia.com/v1/chat/completions \
        --model_type chat \
        --api_key_name MY_API_KEY \
        --output_dir /workspace/results
    
  5. Gather the results

    cat /workspace/results/results.yml
    

Command-Line Tool

Each package comes pre-installed with a set of command-line tools, designed to simplify the execution of evaluation tasks. Below are the available commands and their usage for the nvidia_mtbench_evaluator:

Commands

1. List Evaluation Types

core_evals_mtbench ls

Displays the evaluation types available within the mtbench package.

2. Run an evaluation

The core_evals_mtbench run_eval command executes the evaluation process. Below are the flags and their descriptions:

Required flags

  • --eval_type <string> The type of evaluation to perform.
  • --model_id <string> The name or identifier of the model to evaluate.
  • --model_url <url> The API endpoint where the model is accessible.
  • --model_type <string> The type of the model to evaluate, currently either "chat", "completions", or "vlm".
  • --output_dir <directory> The directory to use as the working directory for the evaluation. The results, including the results.yml output file, will be saved here.

Optional flags

  • --api_key_name <string> The name of the environment variable that stores the Bearer token for the API, if authentication is required.
  • --run_config <path> Specifies the path to a YAML file containing the evaluation definition.

Example

core_evals_mtbench run_eval \
    --eval_type mtbench \
    --model_id my_model \
    --model_type chat \
    --model_url http://localhost:8000 \
    --output_dir ./evaluation_results

If the model API requires authentication, set the API key in an environment variable and reference it using the --api_key_name flag:

export MY_API_KEY="your_api_key_here"

core_evals_mtbench run_eval \
    --eval_type mtbench \
    --model_id my_model \
    --model_type chat \
    --model_url http://localhost:8000 \
    --api_key_name MY_API_KEY \
    --output_dir ./evaluation_results

Configuring evaluations via YAML

Evaluations in NVIDIA Evals Factory are configured using YAML files that define the parameters and settings required for the evaluation process. These configuration files follow a standard API, which ensures consistency across evaluations.

Example of a YAML config:

config:
  type: mtbench
  params:
    parallelism: 50
    limit_samples: 20
    extra:
      judge:
        model_id: "gpt-4"
        top_p: 0.0001
target:
  api_endpoint:
    model_id: meta/llama-3.1-8b-instruct
    type: chat
    url: https://integrate.api.nvidia.com/v1/chat/completions
    api_key: NVIDIA_API_KEY
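
To launch an evaluation from such a file, pass it to run_eval via the --run_config flag. A minimal sketch, assuming the YAML above is saved as config.yml (an illustrative filename); the endpoint details are then taken from the file:

# Run an evaluation driven by a YAML config; config.yml is a hypothetical path
core_evals_mtbench run_eval \
    --eval_type mtbench \
    --run_config config.yml \
    --output_dir ./evaluation_results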

The priority of overrides is as follows (from highest to lowest):

  1. command line arguments
  2. user config (as seen above)
  3. task defaults (defined per task type)
  4. framework defaults
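
For example, a flag passed on the command line takes precedence over the corresponding value in the user config. The sketch below reuses the illustrative config.yml from above; the --model_id flag would override the model_id set in the file:

# Command-line flags (priority 1) override values from the user config (priority 2)
core_evals_mtbench run_eval \
    --eval_type mtbench \
    --run_config config.yml \
    --model_id meta/llama-3.1-70b-instruct \
    --output_dir ./evaluation_results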

The --dry_run option allows you to print the final run configuration and command without executing the evaluation.

Example:

core_evals_mtbench run_eval \
    --eval_type mtbench \
    --model_id my_model \
    --model_type chat \
    --model_url http://localhost:8000 \
    --output_dir .evaluation_results \
    --dry_run

Output:

Rendered config:

command: 'mtbench-evaluator {% if target.api_endpoint.model_id is not none %} --model
  {{target.api_endpoint.model_id}}{% endif %} {% if target.api_endpoint.url is not
  none %} --url {{target.api_endpoint.url}}{% endif %} {% if target.api_endpoint.api_key
  is not none %} --api_key {{target.api_endpoint.api_key}}{% endif %} {% if config.params.request_timeout
  is not none %} --timeout {{config.params.request_timeout}}{% endif %} {% if config.params.max_retries
  is not none %} --max_retries {{config.params.max_retries}}{% endif %} {% if config.params.parallelism
  is not none %} --parallelism {{config.params.parallelism}}{% endif %} {% if config.params.max_new_tokens
  is not none %} --max_tokens {{config.params.max_new_tokens}}{% endif %} --workdir
  {{config.output_dir}} {% if config.params.temperature is not none %} --temperature
  {{config.params.temperature}}{% endif %} {% if config.params.top_p is not none %}
  --top_p {{config.params.top_p}}{% endif %} {% if config.params.extra.args is defined
  %} {{config.params.extra.args}} {% endif %} {% if config.params.limit_samples is
  not none %}--first_n {{config.params.limit_samples}}{% endif %} --generate --judge
  {% if config.params.extra.judge.url is not none %} --judge_url {{config.params.extra.judge.url}}{%
  endif %} {% if config.params.extra.judge.model_id is not none %} --judge_model {{config.params.extra.judge.model_id}}{%
  endif %} {% if config.params.extra.judge.api_key is not none %} --judge_api_key_name
  {{config.params.extra.judge.api_key}}{% endif %} {% if config.params.extra.judge.request_timeout
  is not none %} --judge_request_timeout {{config.params.extra.judge.request_timeout}}{%
  endif %} {% if config.params.extra.judge.max_retries is not none %} --judge_max_retries
  {{config.params.extra.judge.max_retries}}{% endif %} {% if config.params.extra.judge.temperature
  is not none %} --judge_temperature {{config.params.extra.judge.temperature}}{% endif
  %} {% if config.params.extra.judge.top_p is not none %} --judge_top_p {{config.params.extra.judge.top_p}}{%
  endif %} {% if config.params.extra.judge.max_tokens is not none %} --judge_max_tokens
  {{config.params.extra.judge.max_tokens}}{% endif %}     '
framework_name: mtbench
pkg_name: mtbench_evaluator
config:
  output_dir: .evaluation_results
  params:
    limit_samples: null
    max_new_tokens: 1024
    max_retries: 5
    parallelism: 10
    task: mtbench
    temperature: null
    request_timeout: 30
    top_p: null
    extra:
      judge:
        url: null
        model_id: gpt-4
        api_key: null
        request_timeout: 60
        max_retries: 16
        temperature: 0.0
        top_p: 0.0001
        max_tokens: 2048
  supported_endpoint_types:
  - chat
  type: mtbench
target:
  api_endpoint:
    api_key: null
    model_id: my_model
    stream: null
    type: chat
    url: http://localhost:8000


Rendered command:

mtbench-evaluator  --model my_model  --url http://localhost:8000   --timeout 30  --max_retries 5  --parallelism 10  --max_tokens 1024 --workdir .evaluation_results     --generate --judge   --judge_model gpt-4   --judge_request_timeout 60  --judge_max_retries 16  --judge_temperature 0.0  --judge_top_p 0.0001  --judge_max_tokens 2048

FAQ

Deploying a model as an endpoint

NVIDIA Evals Factory utilizes a client-server communication architecture to interact with the model. As a prerequisite, the model must be deployed as an endpoint with a NIM-compatible API.

Users have the flexibility to deploy their model using their own infrastructure and tooling.
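
As one illustration only, not a prescribed method, a model can be served behind an OpenAI-compatible API with an inference server such as vLLM; the exact command depends on your model and environment:

# Illustrative only: expose a model behind an OpenAI-compatible API on port 8000
pip install vllm
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000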

Servers with APIs that conform to the OpenAI/NIM API standard are expected to work seamlessly out of the box.
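
A quick way to confirm that an endpoint is compatible before launching an evaluation is to send it a single chat completion request. A minimal sketch, assuming a server listening on localhost:8000 that serves a model named my_model, with an optional Bearer token stored in MY_API_KEY:

# Sanity-check an OpenAI/NIM-compatible chat endpoint with curl
curl -s http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $MY_API_KEY" \
    -d '{"model": "my_model", "messages": [{"role": "user", "content": "Hello"}]}'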