
An Improved Extreme Learning Machine with Parallelized Feature Mapping Structures

Authors :
Alan Wee-Chung Liew
Lihua Guo
Source :
DICTA
Publication Year :
2016
Publisher :
IEEE, 2016.

Abstract

Compared with deep neural networks trained using back-propagation, the extreme learning machine (ELM) learns thousands of times faster while still producing good generalization performance. To better understand the ELM, this paper studies the effect of noise on the input nodes or hidden neurons. It was found that adding a small amount of noise to the input or to the neurons in the hidden layer has no effect on the performance of the ELM. Although the performance of the ELM improves as the number of hidden neurons increases, beyond a certain limit this can lead to overfitting. In view of this, a parallel ELM (P-ELM) is proposed to improve system performance. The P-ELM is more robust to noise because of its ensemble nature and is less susceptible to overfitting because each parallel hidden layer contains only a moderate number of hidden neurons. Experimental results indicate that the proposed P-ELM achieves better classification performance than the ELM without a large increase in training time.
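The abstract does not specify the exact fusion rule of the parallel feature mappings, but the underlying idea of an ELM (random hidden weights plus a least-squares output layer) and of a P-ELM-style ensemble of moderately sized hidden layers can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the tanh activation, the branch count, and the score-averaging rule are all assumptions.

```python
import numpy as np

def train_elm(X, Y, n_hidden, rng):
    """Train one ELM: random hidden-layer weights, output weights by least squares."""
    n_features = X.shape[1]
    # Hidden-layer weights and biases are drawn at random and never updated,
    # which is why ELM training is so much faster than back-propagation.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer feature mapping
    beta = np.linalg.pinv(H) @ Y      # output weights via Moore-Penrose pseudo-inverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def train_p_elm(X, Y, n_parallel=5, n_hidden=100, seed=0):
    """Hypothetical P-ELM sketch: several parallel hidden layers, each of
    moderate size, trained independently as an ensemble."""
    rng = np.random.default_rng(seed)
    return [train_elm(X, Y, n_hidden, rng) for _ in range(n_parallel)]

def predict_p_elm(X, models):
    # Assumed combination rule: average the branch outputs, then take the
    # argmax over classes (Y is one-hot encoded during training).
    scores = np.mean([predict_elm(X, W, b, beta) for W, b, beta in models], axis=0)
    return scores.argmax(axis=1)
```

In this sketch, keeping each branch at a moderate number of hidden neurons is what limits overfitting, while averaging several independently randomized branches is what provides the ensemble's robustness to noise, consistent with the claims in the abstract.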

Details

Database :
OpenAIRE
Journal :
2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA)
Accession number :
edsair.doi...........971e99769def3114c2ede079ecde1165