Information Dropout: Learning Optimal Representations Through Noisy Computation.
- Source :
- IEEE Transactions on Pattern Analysis & Machine Intelligence; 12/1/2018, Vol. 40 Issue 12, p2897-2905, 9p
- Publication Year :
- 2018
Abstract
- The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of optimal disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample. [ABSTRACT FROM AUTHOR]
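The core mechanism the abstract describes, injecting multiplicative noise into a layer's activations, can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's full method: here the noise scale `alpha` is passed in as a fixed parameter, whereas Information Dropout learns it as a function of the input; the log-normal noise distribution follows the paper's formulation, and binary dropout corresponds to replacing it with a Bernoulli mask.

```python
import numpy as np

def information_dropout(activations, alpha, rng=None, train=True):
    """Multiplicative log-normal noise injection (illustrative sketch).

    activations: array of layer outputs.
    alpha: noise scale; in the paper this is predicted per-unit from the
           input, here it is a fixed scalar for simplicity.
    """
    if not train:
        # At test time no noise is injected.
        return activations
    rng = np.random.default_rng() if rng is None else rng
    # eps ~ logNormal(0, alpha^2): multiplicative noise centered near 1,
    # so each activation is randomly scaled rather than zeroed out.
    eps = np.exp(rng.normal(0.0, alpha, size=activations.shape))
    return activations * eps
```

Larger `alpha` injects more noise and thus limits how much information the layer transmits about its input, which is the information-theoretic regularization effect the abstract refers to.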
Details
- Language :
- English
- ISSN :
- 0162-8828
- Volume :
- 40
- Issue :
- 12
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Pattern Analysis & Machine Intelligence
- Publication Type :
- Academic Journal
- Accession number :
- 132893956
- Full Text :
- https://doi.org/10.1109/TPAMI.2017.2784440