Learning PAC-Bayes Priors for Probabilistic Neural Networks
- Publication Year :
- 2021
- Publisher :
- HAL CCSD, 2021.
Abstract
- Recent works have investigated deep learning models trained by optimising PAC-Bayes bounds, with priors that are learnt on subsets of the data. This combination has been shown to lead not only to accurate classifiers, but also to remarkably tight risk certificates, bearing promise towards self-certified learning (i.e. using all the data to learn a predictor and certify its quality). In this work, we empirically investigate the role of the prior. We experiment on six datasets with different strategies and amounts of data to learn data-dependent PAC-Bayes priors, and we compare them in terms of their effect on test performance of the learnt predictors and tightness of their risk certificates. We ask what amount of data should be allocated for building the prior and show that the optimum may be dataset dependent. We demonstrate that using a small percentage of the prior-building data for validation of the prior leads to promising results. We include a comparison of underparameterised and overparameterised models, along with an empirical study of different training objectives and regularisation strategies to learn the prior distribution.
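- The certification step described above can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a generic computation of a PAC-Bayes-kl risk certificate (Maurer's bound), assuming the data has already been split into a prior-building set and a held-out set of size `n` used to evaluate the bound. The empirical risk and the KL divergence between posterior and prior are taken as hypothetical inputs.

```python
import math

def binary_kl(q, p):
    """KL divergence between Bernoulli(q) and Bernoulli(p)."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse_upper(emp_risk, rhs, tol=1e-9):
    """Largest p with kl(emp_risk || p) <= rhs, found by binary search."""
    lo, hi = emp_risk, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_kl(emp_risk, mid) > rhs:
            hi = mid
        else:
            lo = mid
    return lo

def pac_bayes_kl_certificate(emp_risk, kl_post_prior, n, delta=0.05):
    """Risk certificate from the PAC-Bayes-kl bound:
    kl(emp_risk || true_risk) <= (KL(Q||P) + ln(2*sqrt(n)/delta)) / n,
    inverted to give an upper bound on the true risk holding
    with probability at least 1 - delta."""
    rhs = (kl_post_prior + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inverse_upper(emp_risk, rhs)

# Hypothetical numbers: 5% empirical error, KL(Q||P) = 10 nats,
# 50,000 held-out examples not used to build the prior.
cert = pac_bayes_kl_certificate(0.05, 10.0, 50_000)
```

  A tighter posterior-to-prior KL (which a well-chosen data-dependent prior encourages) directly shrinks the right-hand side of the bound, which is why the allocation of data to prior-building matters for certificate tightness.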
- Subjects :
- FOS: Computer and information sciences
Computer Science - Machine Learning
[INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG]
[STAT.ML]Statistics [stat]/Machine Learning [stat.ML]
Computer Vision and Pattern Recognition (cs.CV)
Computer Science - Computer Vision and Pattern Recognition
[STAT.TH]Statistics [stat]/Statistics Theory [stat.TH]
Machine Learning (cs.LG)
Details
- Language :
- English
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....83b31abd1ece13d412ee74fbe1c4f10e