
LL-ELM: A regularized extreme learning machine based on L1-norm and Liu estimator.

Authors :
Yıldırım, Hasan
Revan Özkale, M.
Source :
Neural Computing & Applications; Aug2021, Vol. 33 Issue 16, p10469-10484, 16p
Publication Year :
2021

Abstract

In this paper, we propose a novel regularization and variable selection algorithm called the Liu–Lasso extreme learning machine (LL-ELM) to deal with the ELM's drawbacks, such as instability, poor generalizability, and underfitting or overfitting due to an inappropriate choice of hidden layer nodes. The Liu estimator, a statistically biased estimator, is combined with the Lasso regression approach in the learning phase of the proposed algorithm. The proposed algorithm is compared with the conventional ELM and its variants, including ELM forms based on the Liu estimator (Liu-ELM), the L1-norm (Lasso-ELM), the L2-norm (Ridge-ELM), and the elastic net (Enet-ELM). Convenient selection methods for determining the tuning parameters of each algorithm are used in the comparisons. The results show that there always exists a d value such that LL-ELM outperforms either Lasso-ELM or Enet-ELM in terms of learning and generalization performance. This improvement of LL-ELM over Lasso-ELM and Enet-ELM, measured by testing root mean square error, reaches up to 27% depending on the proposed d selection methods. Consequently, LL-ELM can be considered a competitive algorithm for both regression and classification tasks because of its easy integrability. [ABSTRACT FROM AUTHOR]
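To illustrate the idea described in the abstract, the following is a minimal sketch of an ELM output-weight solve using the standard Liu estimator form, beta_d = (H'H + I)^{-1}(H'y + d * beta_OLS) with 0 < d < 1. This is not the paper's implementation: the network sizes, the sigmoid activation, the final soft-thresholding step standing in for the Lasso component, and all variable names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration)
n, p = 200, 5
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

# ELM hidden layer: random input weights and biases, sigmoid activation
L = 50  # number of hidden nodes (arbitrary choice)
W = rng.standard_normal((p, L))
b = rng.standard_normal(L)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Conventional ELM: least-squares output weights via the pseudoinverse
beta_ols = np.linalg.pinv(H) @ y

# Liu estimator of the output weights (standard Liu 1993 form)
d = 0.5  # shrinkage parameter, 0 < d < 1
G = H.T @ H
beta_liu = np.linalg.solve(G + np.eye(L), H.T @ y + d * beta_ols)

# Crude stand-in for the Lasso component: one soft-thresholding
# (proximal) step; the paper's actual L1 procedure may differ.
lam = 0.01
beta_ll = np.sign(beta_liu) * np.maximum(np.abs(beta_liu) - lam, 0.0)

print("nonzero weights after thresholding:", np.count_nonzero(beta_ll))
```

The closed-form Liu solve keeps the single-pass training that makes ELM attractive, while d controls how strongly the estimate is shrunk toward the ridge-like solution.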

Details

Language :
English
ISSN :
09410643
Volume :
33
Issue :
16
Database :
Complementary Index
Journal :
Neural Computing & Applications
Publication Type :
Academic Journal
Accession number :
151304582
Full Text :
https://doi.org/10.1007/s00521-021-05806-0