
Active Testing: Sample-Efficient Model Evaluation

Authors :
Kossen, Jannik
Farquhar, Sebastian
Gal, Yarin
Rainforth, Tom
Publication Year :
2021

Abstract

We introduce a new framework for sample-efficient model evaluation that we call active testing. While approaches like active learning reduce the number of labels needed for model training, existing literature largely ignores the cost of labeling test data, typically making the unrealistic assumption that large labeled test sets are available for model evaluation. This creates a disconnect from real applications, where test labels are important and just as expensive as training labels, e.g. when optimizing hyperparameters. Active testing addresses this by carefully selecting the test points to label, ensuring model evaluation is sample-efficient. To this end, we derive theoretically-grounded and intuitive acquisition strategies that are specifically tailored to the goals of active testing, noting that these are distinct from those of active learning. Because actively selecting labels introduces a bias, we further show how to remove this bias while simultaneously reducing the variance of the estimator. Active testing is easy to implement and can be applied to any supervised machine learning method. We demonstrate its effectiveness on models including WideResNets and Gaussian processes, and on datasets including Fashion-MNIST and CIFAR-100.

Comment: Published at the 38th International Conference on Machine Learning (ICML 2021)
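
To make the idea concrete, below is a minimal illustrative sketch (in Python) of the mechanism the abstract describes: test points are drawn from an acquisition distribution that favors informative points, and the resulting sampling bias is removed with importance weights so the risk estimate stays unbiased. This is not the paper's exact acquisition strategy or estimator; the toy data, the model_prob predictor, and the uncertainty-based proposal are hypothetical stand-ins for illustration only.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): a pool of unlabeled test inputs, a model to
# evaluate, and true labels that would each cost something to acquire.
N = 1000
X_pool = rng.normal(size=(N, 2))
true_w = np.array([1.5, -2.0])
y_pool = (X_pool @ true_w + rng.normal(scale=0.5, size=N) > 0).astype(int)

def model_prob(x):
    """Predicted P(y=1 | x) of the (imperfect) model under evaluation."""
    return 1 / (1 + np.exp(-(x @ np.array([1.0, -1.5]))))

def loss(p, y):
    """Zero-one loss of the model's hard prediction."""
    return float((p > 0.5) != y)

# Acquisition proposal: prefer points where the model is uncertain, since
# these are expected to contribute most to the test loss (an illustrative
# heuristic, not the paper's derived acquisition strategy).
p_all = model_prob(X_pool)
scores = 1 - 2 * np.abs(p_all - 0.5)      # high near the decision boundary
q = scores / scores.sum()                 # proposal distribution over the pool

# Actively label a small subset and correct the induced bias with
# importance weights (a plain Monte Carlo correction; the paper derives
# an unbiased estimator with lower variance than this).
M = 50
idx = rng.choice(N, size=M, replace=True, p=q)
weights = 1.0 / (N * q[idx])              # importance weights vs. uniform sampling
losses = np.array([loss(p_all[i], y_pool[i]) for i in idx])
risk_estimate = float(np.mean(weights * losses))

print(f"Actively-sampled risk estimate ({M} labels): {risk_estimate:.3f}")
print(f"Full test-set risk ({N} labels): "
      f"{np.mean([loss(p, y) for p, y in zip(p_all, y_pool)]):.3f}")

Because each sampled loss is divided by N times its selection probability, the weighted average has the full test risk as its expectation even though only M of the N labels are acquired; the paper's contribution is acquisition strategies and an estimator that achieve this with substantially lower variance than the plain importance-sampling correction sketched here.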

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2103.05331
Document Type :
Working Paper