
Convergence of online gradient method for feedforward neural networks with smoothing L1/2 regularization penalty.

Authors :
Fan, Qinwei
Zurada, Jacek M.
Wu, Wei
Source :
Neurocomputing. May 2014, Vol. 131, p208-216. 9p.
Publication Year :
2014

Abstract

Minimization of the training regularization term has been recognized as an important objective for sparse modeling and generalization in feedforward neural networks. Most studies so far have focused on the popular L2 regularization penalty. In this paper, we consider the convergence of the online gradient method with a smoothing L1/2 regularization term. For the normal (unsmoothed) L1/2 regularization, the objective function is the sum of the error function and a non-convex, non-smooth, and non-Lipschitz penalty, which causes oscillation of the error function and of the gradient norm. Using smoothing approximation techniques, however, this deficiency of the normal L1/2 regularization term can be addressed. This paper establishes strong convergence results for the smoothing L1/2 regularization. Furthermore, we prove the boundedness of the weights during network training, so the usual assumption that the weights are bounded is no longer needed in the convergence proof. Simulation results support the theoretical findings and demonstrate that our algorithm performs better than two other algorithms with L2 and normal L1/2 regularizations, respectively.
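As a rough illustration of the idea summarized above, the following is a minimal sketch, not the authors' exact algorithm, of online (per-sample) gradient training for a one-hidden-layer sigmoid network with a smoothed L1/2 penalty. The smoothing f(w) = (w^2 + eps)^(1/4) ≈ |w|^(1/2) is an illustrative assumption; the paper defines its own smoothing function, and the network size, learning rate, penalty weight, and toy data below are made up for the example.

```python
# Minimal sketch: online gradient descent with a smoothed L1/2 penalty.
# The smoothing (w^2 + eps)^(1/4) is an illustrative choice, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smooth_l_half_grad(w, eps=1e-4):
    """Gradient of the smooth surrogate sum_i (w_i^2 + eps)^(1/4),
    which approximates the L1/2 penalty but is well defined at w = 0."""
    return 0.5 * w * (w**2 + eps) ** (-0.75)

# Toy data: XOR-like regression targets in [0, 1].
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])

n_in, n_hid = 2, 8
V = rng.normal(scale=0.5, size=(n_hid, n_in))   # input-to-hidden weights
u = rng.normal(scale=0.5, size=n_hid)           # hidden-to-output weights
eta, lam = 0.1, 1e-3                            # learning rate, penalty weight

for epoch in range(2000):
    for i in rng.permutation(len(X)):           # online: one sample at a time
        x, y = X[i], Y[i]
        h = sigmoid(V @ x)                       # hidden activations
        out = sigmoid(u @ h)                     # network output
        err = out - y
        # Backprop through the squared-error term 0.5 * err**2.
        d_out = err * out * (1 - out)
        grad_u = d_out * h
        grad_V = np.outer(d_out * u * h * (1 - h), x)
        # Add the gradient of the smoothed L1/2 penalty.
        grad_u += lam * smooth_l_half_grad(u)
        grad_V += lam * smooth_l_half_grad(V)
        u -= eta * grad_u
        V -= eta * grad_V

print("final outputs:", np.round(sigmoid(sigmoid(X @ V.T) @ u), 3))
```

The point the sketch mirrors is that the smoothed penalty has a bounded, well-defined gradient near zero weights, so the per-sample updates avoid the oscillation that the raw |w|^(1/2) term can cause.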

Details

Language :
English
ISSN :
0925-2312
Volume :
131
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
94575791
Full Text :
https://doi.org/10.1016/j.neucom.2013.10.023