
Gradient Regularization as Approximate Variational Inference

Authors :
Ali Unlu
Laurence Aitchison
Source :
Entropy, Vol 23, Iss 12, p 1629 (2021)
Publication Year :
2021
Publisher :
MDPI AG, 2021.

Abstract

We developed Variational Laplace for Bayesian neural networks (BNNs), which exploits a local approximation of the curvature of the likelihood to estimate the ELBO without the need for stochastic sampling of the neural-network weights. The Variational Laplace objective is simple to evaluate, as it is the log-likelihood, plus weight-decay, plus a squared-gradient regularizer. Variational Laplace gave better test performance and lower expected calibration errors than maximum a posteriori inference and standard sampling-based variational inference, despite using the same variational approximate posterior. Finally, we emphasize the care needed in benchmarking standard VI, as there is a risk of stopping the optimization before the variance parameters have converged. We show that such premature stopping can be avoided by increasing the learning rate for the variance parameters.
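The sketch below is a minimal, hypothetical illustration (not the authors' code) of a loss with the structure described in the abstract: a log-likelihood term, a weight-decay term from the prior, and a squared-gradient regularizer. The variance weighting sigma2 and the prior precision prior_prec are assumed placeholder constants; the exact scaling and derivation of each term are given in the paper.

```python
import torch
import torch.nn as nn

def variational_laplace_style_loss(model, x, y, sigma2=1e-3, prior_prec=1e-2):
    """Negative log-likelihood + weight-decay + squared-gradient penalty (illustrative)."""
    nll = nn.functional.cross_entropy(model(x), y)

    # Squared-gradient regularizer: sum of squared gradients of the NLL w.r.t. the
    # weights, weighted by an assumed fixed posterior variance sigma2.
    grads = torch.autograd.grad(nll, list(model.parameters()), create_graph=True)
    sq_grad = sum((g ** 2).sum() for g in grads)

    # Weight-decay term arising from a Gaussian prior over the weights.
    weight_decay = sum((p ** 2).sum() for p in model.parameters())

    return nll + 0.5 * sigma2 * sq_grad + 0.5 * prior_prec * weight_decay

# Hypothetical usage with a tiny classifier on random data.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
loss = variational_laplace_style_loss(model, x, y)
loss.backward()  # gradients flow through the squared-gradient term via create_graph=True
```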

Details

Language :
English
ISSN :
1099-4300
Volume :
23
Issue :
12
Database :
Directory of Open Access Journals
Journal :
Entropy
Publication Type :
Academic Journal
Accession number :
edsdoj.78d9df94cab4b1b9001c7ce880ee52b
Document Type :
article
Full Text :
https://doi.org/10.3390/e23121629