
Embedded stacked group sparse autoencoder ensemble with L1 regularization and manifold reduction.

Authors :
Li, Yongming
Lei, Yan
Wang, Pin
Jiang, Mingfeng
Liu, Yuchuan
Source :
Applied Soft Computing; Mar 2021, Vol. 101
Publication Year :
2021

Abstract

Learning useful representations from original features is a key issue in classification tasks. Stacked autoencoders (SAEs) are easy to understand and implement, and they are powerful tools for learning deep features from original features, so they are popular for classification problems. The deep features can further be combined with the original features to construct more representative features for classification. However, existing SAEs do not consider the original features within the network structure or during training, so the deep features have low complementarity with the original features. To address this problem, this paper proposes an embedded stacked group sparse autoencoder (ESGSAE) for more effective feature learning. Unlike traditional stacked autoencoders, the ESGSAE model accounts for the complementarity between the original features and the hidden outputs by embedding the original features into the hidden layers. To alleviate the impact of the small-sample problem on the generalization of the proposed ESGSAE model, an L1 regularization-based feature selection strategy is designed to further improve the feature quality. After that, an ensemble model with a support vector machine (SVM) and weighted local discriminant preservation projection (w_LPPD) is designed to further enhance the feature quality. Based on these designs, an embedded stacked group sparse autoencoder ensemble with L1 regularization and manifold reduction is proposed to obtain deep features with high complementarity under the small-sample problem. Several representative public datasets are used to verify the proposed algorithm. The results demonstrate that the ESGSAE ensemble model with L1 regularization and manifold reduction yields superior performance compared to other existing and state-of-the-art feature learning algorithms, including representative deep stacked autoencoder methods. Specifically, compared with the original features, representative feature extraction algorithms, and improved autoencoders, the proposed algorithm improves classification accuracy by up to 13.33%, 7.33%, and 9.55%, respectively. The data and code can be found at: https://share.weiyun.com/Jt7qeORm

Highlights:
• A hybrid feature is embedded into the training process to construct a novel deep model.
• A group sparsity constraint is introduced to obtain sparse representations.
• The ESGSAE ensemble model is constructed to obtain highly complementary features.
• A three-step feature learning mechanism is realized.
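
To make the described pipeline concrete, below is a minimal sketch (PyTorch and scikit-learn, not the authors' released code at the link above) of the core ideas: an autoencoder layer whose output code is augmented with the original features, a group-sparsity penalty on the hidden units, and an L1-based feature selection step followed by an SVM base learner. All layer sizes, the grouping of hidden units, loss weights, and the toy data are illustrative assumptions, and the w_LPPD manifold reduction step is omitted.

# Minimal sketch (not the authors' code): one "embedded" autoencoder layer that
# concatenates the original features to the hidden code, a group-sparsity
# penalty, and downstream L1 feature selection + SVM. Sizes, groups, and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class EmbeddedSparseAE(nn.Module):
    """One layer of the stacked model: reconstructs its input, but the code
    passed to the next layer is [hidden | original features]."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x, x_orig):
        h = torch.sigmoid(self.enc(x))
        recon = self.dec(h)
        # embedding step: the hidden output is augmented with the raw features
        code = torch.cat([h, x_orig], dim=1)
        return code, recon, h

def group_sparsity(h, n_groups=4):
    """Group-sparse penalty: sum of L2 norms over contiguous groups of hidden
    units (an assumed grouping; the paper's grouping scheme may differ)."""
    groups = torch.chunk(h, n_groups, dim=1)
    return sum(g.norm(p=2, dim=1).mean() for g in groups)

# Toy training loop on random data (illustrative only).
torch.manual_seed(0)
X = torch.randn(64, 30)                       # 64 samples, 30 original features
layer = EmbeddedSparseAE(in_dim=30, hid_dim=16)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
for _ in range(50):
    code, recon, h = layer(X, X)              # first layer: input == original features
    loss = nn.functional.mse_loss(recon, X) + 1e-3 * group_sparsity(h)
    opt.zero_grad(); loss.backward(); opt.step()

# L1-based feature selection and an SVM base learner for the ensemble stage
# (scikit-learn used as a stand-in; w_LPPD is not implemented here).
import numpy as np
from sklearn.svm import LinearSVC, SVC
from sklearn.feature_selection import SelectFromModel

y = np.random.randint(0, 2, size=64)          # dummy labels
feats = code.detach().numpy()                 # deep features + original features
l1_model = LinearSVC(C=1.0, penalty="l1", dual=False).fit(feats, y)
selector = SelectFromModel(l1_model, prefit=True)
clf = SVC(kernel="rbf").fit(selector.transform(feats), y)

In this sketch the concatenation inside forward() is what makes the layer "embedded": every subsequent layer sees both the learned code and the raw features, which is how the abstract describes raising the complementarity between deep and original features.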

Details

Language :
English
ISSN :
1568-4946
Volume :
101
Database :
Supplemental Index
Journal :
Applied Soft Computing
Publication Type :
Academic Journal
Accession number :
148867130
Full Text :
https://doi.org/10.1016/j.asoc.2020.107003