
Dropout with Expectation-linear Regularization

Authors :
Ma, Xuezhe
Gao, Yingkai
Hu, Zhiting
Yu, Yaoliang
Deng, Yuntian
Hovy, Eduard
Publication Year :
2016

Abstract

Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout's training and inference phases, introduced for tractability reasons, has remained largely under-appreciated. In this work, we first formulate dropout as a tractable approximation of a latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. We then introduce (approximate) expectation-linear dropout neural networks, whose inference gap we formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove upper bounds on the loss in accuracy due to expectation-linearization and describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap consistently improves performance.

Comment: Published as a conference paper at ICLR 2017. Camera-ready version. 23 pages (paper + appendix)
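As a rough illustration of the idea described in the abstract (regularizing the standard dropout objective with a measure of the gap between the stochastic training-time output and the deterministic inference-time output), the following is a minimal sketch assuming PyTorch. The network, the squared-distance gap measure, and the weight lam are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: add a penalty on the gap between the dropout (stochastic)
# forward pass and the deterministic (expectation / inference-mode) forward pass.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, d_in=784, d_hidden=512, d_out=10, p=0.5):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)
        self.p = p

    def forward(self, x, sample_dropout=True):
        h = F.relu(self.fc1(x))
        if sample_dropout:
            # Stochastic pass: sample a dropout mask (inverted-dropout scaling).
            h = F.dropout(h, p=self.p, training=True)
        # Deterministic pass: no mask, i.e. the inference-time expectation.
        return self.fc2(h)

def loss_with_gap_penalty(model, x, y, lam=1.0):
    logits_dropout = model(x, sample_dropout=True)    # training-phase output
    logits_expected = model(x, sample_dropout=False)  # inference-phase output
    task_loss = F.cross_entropy(logits_dropout, y)
    # Illustrative gap measure: mean squared distance between the two outputs.
    gap = (logits_dropout - logits_expected).pow(2).sum(dim=1).mean()
    return task_loss + lam * gap
```

The penalty term keeps the stochastic and deterministic outputs close, which is the sense in which the inference gap is explicitly controlled; the paper itself defines and analyzes the gap formally rather than through this particular penalty.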

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1609.08017
Document Type :
Working Paper