Accounting for variance in machine learning benchmarks
- Author
Bouthillier, Xavier; Delaunay, Pierre; Bronzi, Mirko; Trofimov, Assya; Nichyporuk, Brennan; Szeto, Justin; Sepah, Naz; Raff, Edward; Madan, Kanika; Voleti, Vikram; Kahou, Samira Ebrahimi; Michalski, Vincent; Serdyuk, Dmitriy; Arbel, Tal; Pal, Chris; Varoquaux, Gaël; Vincent, Pascal
- Affiliations
Montreal Institute for Learning Algorithms (MILA); Centre de Recherches Mathématiques (CRM), Université de Montréal (UdeM); Institut de Recherche en Immunologie et en Cancérologie (IRIC), UdeM; Center for Intelligent Machines, McGill University (Montréal, Canada); University of Maryland, Baltimore County (UMBC); École de Technologie Supérieure (ÉTS, Montréal); Canadian Institute for Advanced Research (CIFAR); École Polytechnique de Montréal (EPM); PARIETAL, Inria Saclay - Île-de-France; NeuroSpin, CEA, Université Paris-Saclay; Independent Author. Supported in part by ANR-17-CE23-0018 (DirtyData).
- Subjects
FOS: Computer and information sciences; Machine Learning (cs.LG); Machine Learning (stat.ML); [MATH.MATH-ST] Mathematics/Statistics; [INFO.INFO-AI] Computer Science/Artificial Intelligence
- Abstract
Strong empirical evidence that one machine-learning algorithm A outperforms another algorithm B ideally calls for multiple trials that optimize the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization, and hyperparameter choice markedly impacts the results. We analyze the predominant comparison methods used today in light of this variance. We show the counter-intuitive result that adding more sources of variation to an imperfect estimator brings it closer to the ideal estimator at a 51-fold reduction in compute cost. Building on these results, we study the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons. (Submitted to MLSys 2021.)
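As a minimal sketch of the idea the abstract describes (not the paper's own code), the snippet below estimates the variance of a benchmark comparison by rerunning two classifiers over several sources of variation, here data splits and model seeds; the toy dataset, model choices, and the `score_over_runs` helper are illustrative assumptions.

```python
# Sketch: account for benchmark variance by resampling sources of variation
# (data splits and model seeds), then report the A-B gap with an uncertainty
# estimate instead of a single-run score. Illustrative, not the paper's code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

def score_over_runs(make_model, n_runs=20):
    """Test accuracy over runs that vary the data split and the model seed."""
    scores = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=seed)  # data-sampling variance
        model = make_model(seed)                      # initialization variance
        scores.append(model.fit(X_tr, y_tr).score(X_te, y_te))
    return np.array(scores)

a = score_over_runs(lambda s: RandomForestClassifier(random_state=s))
b = score_over_runs(lambda s: LogisticRegression(max_iter=1000))

diff = a.mean() - b.mean()
# Standard error of the mean difference, assuming independent runs.
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
print(f"A-B accuracy gap: {diff:.3f} +/- {2 * se:.3f} (~95% interval)")
```

If the reported gap is smaller than the interval half-width, a single-run comparison between A and B would be unreliable, which is the failure mode the paper quantifies.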
- Published
2021