Regularizing Deep Neural Networks by Enhancing Diversity in Feature Extraction.
- Source :
- IEEE Transactions on Neural Networks & Learning Systems; Sep2019, Vol. 30 Issue 9, p2650-2661, 12p
- Publication Year :
- 2019
Abstract
- This paper proposes a new and efficient technique to regularize a neural network in the context of deep learning using correlations among features. Previous studies have shown that oversized deep neural network models tend to produce many redundant features that are either shifted versions of one another or are very similar and show little or no variation, resulting in redundant filtering. We propose a way to address this problem and show that such redundancy can be avoided using regularization and an adaptive feature dropout mechanism. We show that regularizing both negatively and positively correlated features according to their differentiation and based on their relative cosine distances yields a network that extracts dissimilar features with less overfitting and better generalization. This concept is illustrated with a deep multilayer perceptron, a convolutional neural network, a sparse autoencoder, a gated recurrent unit, and a long short-term memory network on the MNIST digit recognition, CIFAR-10, ImageNet, and Stanford Natural Language Inference data sets. [ABSTRACT FROM AUTHOR]
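- The abstract does not spell out the exact loss term, so the sketch below is a minimal illustration of one plausible reading: a penalty on the mean absolute pairwise cosine similarity among feature vectors, which pushes apart both positively and negatively correlated features, as the abstract describes. The function name `cosine_diversity_penalty`, the use of |cos|, and the weighting scheme are assumptions for illustration, not the authors' implementation.

```python
import torch

def cosine_diversity_penalty(features: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Illustrative diversity penalty (not the paper's exact loss):
    mean absolute pairwise cosine similarity among feature vectors.

    features: (num_features, dim) tensor, e.g. one row per filter,
              or per hidden unit's activations over a batch.
    """
    # Normalize each feature vector to unit length.
    normed = features / (features.norm(dim=1, keepdim=True) + eps)
    # Gram matrix of pairwise cosine similarities.
    cos = normed @ normed.t()
    # Zero out the diagonal (self-similarity is always 1).
    off_diag = cos - torch.eye(cos.size(0), device=cos.device)
    # |cos| penalizes positive and negative correlation symmetrically,
    # matching the abstract's treatment of both correlation signs.
    n = cos.size(0)
    return off_diag.abs().sum() / (n * (n - 1))

# Usage: add the penalty, scaled by a coefficient, to the task loss:
#   loss = task_loss + lambda_div * cosine_diversity_penalty(hidden_acts)
```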
Details
- Language :
- English
- ISSN :
- 2162-237X
- Volume :
- 30
- Issue :
- 9
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Neural Networks & Learning Systems
- Publication Type :
- Periodical
- Accession Number :
- 138255950
- Full Text :
- https://doi.org/10.1109/TNNLS.2018.2885972