
Progress in Self-Certified Neural Networks

Authors :
Perez-Ortiz, Maria
Rivasplata, Omar
Parrado-Hernandez, Emilio
Guedj, Benjamin
Shawe-Taylor, John
Affiliations :
University College London (UCL)
Department of Computer Science, University College London (UCL-CS)
Universidad Carlos III de Madrid (UC3M)
Inria-CWI: Centrum Wiskunde & Informatica (CWI) - Institut National de Recherche en Informatique et en Automatique (Inria)
MOdel for Data Analysis and Learning (MODAL), Inria Lille - Nord Europe
Laboratoire Paul Painlevé - UMR 8524 (LPP), CNRS, Université de Lille
Evaluation des technologies de santé et des pratiques médicales - ULR 2694 (METRICS), Université de Lille, CHRU Lille, Polytech Lille
The Inria London Programme (Inria-London)
Funding :
This work is also supported by the U.S. Army Research Laboratory and the U.S. Army Research Office, and by the U.K. Ministry of Defence and the U.K. Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/R013616/1.
European Project :
820437, H2020-EU.1.2.3. - FET Flagships, Humane AI (2019)
Source :
NeurIPS 2021 - Conference on Neural Information Processing Systems, Workshop on Bayesian Deep Learning, Dec 2021, Virtual, United Kingdom
Publication Year :
2021
Publisher :
HAL CCSD, 2021.

Abstract

A learning method is self-certified if it uses all available data to simultaneously learn a predictor and certify its quality with a tight statistical certificate that is valid on unseen data. Recent work has shown that neural network models trained by optimising PAC-Bayes bounds lead not only to accurate predictors but also to tight risk certificates, showing promise towards achieving self-certified learning. In this context, learning and certification strategies based on PAC-Bayes bounds are especially attractive due to their ability to leverage all data to learn a posterior and simultaneously certify its risk with a tight numerical certificate. In this paper, we assess the progress towards self-certification in probabilistic neural networks learnt with PAC-Bayes-inspired objectives. We empirically compare (on four classification datasets) classical test set bounds for deterministic predictors and a PAC-Bayes bound for randomised self-certified predictors. We first show that both generalisation bounds are not far from out-of-sample test set errors. We then show that in data-starvation regimes, holding out data for the test set bounds adversely affects generalisation performance, whereas self-certified strategies based on PAC-Bayes bounds do not suffer from this drawback, suggesting that they may be a suitable choice for the small-data regime. We also find that probabilistic neural networks learnt with PAC-Bayes-inspired objectives yield certificates that can be surprisingly competitive with commonly used test set bounds.
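For context, a hedged sketch of the two kinds of certificates being compared, written in standard formulations (a test-set bound of the Langford type and the Maurer / Langford-Seeger PAC-Bayes-kl bound); the notation below is illustrative and the paper may use tighter or slightly different variants.

For a deterministic predictor $h$ with empirical error $\hat{R}_{\mathrm{test}}(h)$ on $n_{\mathrm{test}}$ held-out examples, with probability at least $1-\delta$,
\[
\mathrm{kl}\big(\hat{R}_{\mathrm{test}}(h)\,\big\|\,R(h)\big) \;\le\; \frac{\ln(1/\delta)}{n_{\mathrm{test}}},
\]
whereas for a randomised (probabilistic) predictor with posterior $Q$, prior $P$ and empirical error $\hat{R}_S(Q)$ computed on all $n$ training examples, with probability at least $1-\delta$,
\[
\mathrm{kl}\big(\hat{R}_S(Q)\,\big\|\,R(Q)\big) \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\!\big(2\sqrt{n}/\delta\big)}{n},
\]
where $\mathrm{kl}(\cdot\|\cdot)$ denotes the KL divergence between Bernoulli distributions. Inverting either inequality in its second argument yields the numerical risk certificate; only the PAC-Bayes certificate uses all the data, with no held-out test set.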

Details

Language :
English
Database :
OpenAIRE
Journal :
NeurIPS 2021 - Conference on Neural Information Processing Systems, Workshop on Bayesian Deep Learning, Dec 2021, Virtual, United Kingdom
Accession number :
edsair.doi.dedup.....068bc6349640882eaa54ad2e98ba0538