MLPerf Inference is a benchmark suite for measuring how fast systems can run models in a variety of deployment scenarios. These base containers enable anyone interested in NVIDIA's MLPerf Inference submission to reproduce NVIDIA's leading results. The included containers are solely for benchmarking purposes and should not be used in any production environment.
For details on how to reproduce NVIDIA's results, please visit the MLCommons GitHub page and consult the README file of NVIDIA's submission repository.
The user license is included under the container's root directory as /NVIDIA_MLPerf_Evaluation_License. By downloading this container, you agree to follow all the requirements stated in the EULA.
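As a minimal sketch, the license can be inspected directly from the command line; note that the image name below is a placeholder, not the actual tag, which should be taken from this catalog page:

```bash
# Placeholder image name; substitute the actual repository and tag from the NGC catalog.
IMAGE="nvcr.io/nvidia/mlperf-inference:latest"

docker pull "$IMAGE"

# Print the evaluation license shipped at the container's root directory.
docker run --rm "$IMAGE" cat /NVIDIA_MLPerf_Evaluation_License
```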
MLCommons official webpage: https://mlcommons.org/en/