
The Vision Behind MLPerf: Understanding AI Inference Performance.

Authors:
Reddi, Vijay Janapa
Cheng, Christine
Kanter, David
Mattson, Peter
Schmuelling, Guenther
Wu, Carole-Jean
Source:
IEEE Micro. May/Jun 2021, Vol. 41, Issue 3, p10-18. 9p.
Publication Year:
2021

Abstract

Deep learning has sparked a renaissance in computer systems and architecture. Despite the breakneck pace of innovation, a crucial issue concerns the research and industry communities at large: how to enable neutral and useful performance assessment for machine learning (ML) software frameworks, ML hardware accelerators, and ML systems comprising both the software stack and the hardware. The ML field needs systematic methods for evaluating performance that represent real-world use cases and are useful for making comparisons across different software and hardware implementations. MLPerf answers the call. MLPerf is an ML benchmark standard driven by academia and industry (70+ organizations). Built on the expertise of multiple organizations, MLPerf establishes a standard benchmark suite with proper metrics and benchmarking methodologies to level the playing field for performance measurement of different ML inference hardware, software, and services. [ABSTRACT FROM AUTHOR]

Details

Language:
English
ISSN:
0272-1732
Volume:
41
Issue:
3
Database:
Academic Search Index
Journal:
IEEE Micro
Publication Type:
Academic Journal
Accession Number:
150557696
Full Text:
https://doi.org/10.1109/MM.2021.3066343