
NVIDIA MLPerf Inference

Description: MLPerf Inference containers are base containers for people interested in NVIDIA's MLPerf Inference submission results.
Publisher: NVIDIA
Latest Tag: mlpinf-v4.0-cuda11.4-cudnn8.6-aarch64-orin-public
Modified: April 6, 2024
Compressed Size: 6.71 GB
Multinode Support: No
Multi-Arch Support: No
Architecture: Linux / arm64


MLPerf Inference NVIDIA-Optimized Implementations

MLPerf Inference is a benchmark suite for measuring how fast systems can run models in a variety of deployment scenarios. These base containers enable people interested in NVIDIA's MLPerf Inference submissions to reproduce NVIDIA's leading results. The included containers are solely for benchmarking purposes and should not be used in any production environment.

Getting Started

For details on how to reproduce NVIDIA's results, please visit the MLCommons GitHub page and read the README file in NVIDIA's submission repository.
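As a rough sketch, the steps below show one way to pull the container and fetch NVIDIA's submission code. The exact image path on nvcr.io and the results-repository layout are assumptions and should be verified against this catalog page and the MLCommons GitHub organization.

    # Pull the benchmarking container from NGC (image path assumed; use the
    # pull command shown on this catalog page for the authoritative name)
    docker pull nvcr.io/nvidia/mlperf-inference:mlpinf-v4.0-cuda11.4-cudnn8.6-aarch64-orin-public

    # Fetch the MLCommons results repository that carries NVIDIA's submission
    # code and its README (repository name assumed for the v4.0 round)
    git clone https://github.com/mlcommons/inference_results_v4.0.git
    cd inference_results_v4.0/closed/NVIDIA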

EULA

The user license has been included in the container's root directory as /NVIDIA_MLPerf_Evaluation_License. By downloading this container, you agree to comply with all the requirements stated in the EULA.
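As a minimal sketch, the license text can be reviewed directly from the container before use (the image path is the same assumption as above; the license path is taken from this page):

    # Print the evaluation license shipped at the container's root
    docker run --rm nvcr.io/nvidia/mlperf-inference:mlpinf-v4.0-cuda11.4-cudnn8.6-aarch64-orin-public \
        cat /NVIDIA_MLPerf_Evaluation_License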

Misc

MLCommons official website: https://mlcommons.org/en/