
A modified gradient learning algorithm with smoothing L1/2 regularization for Takagi–Sugeno fuzzy models.

Authors :
Liu, Yan
Wu, Wei
Fan, Qinwei
Yang, Dakun
Wang, Jian
Source :
Neurocomputing. Aug2014, Vol. 138, p229-237. 9p.
Publication Year :
2014

Abstract

A popular and feasible approach to determining the appropriate size of a neural network is to remove unnecessary connections from an oversized network. The advantage of L1/2 regularization for sparse modeling is well recognized. However, the nonsmoothness of the L1/2 regularizer may lead to an oscillation phenomenon during training. This paper proposes an approach with smoothing L1/2 regularization for Takagi–Sugeno (T–S) fuzzy models, in order to improve learning efficiency and to promote sparsity of the models. The new smoothing L1/2 regularizer removes the oscillation. It also enables us to prove weak and strong convergence results for zero-order T–S fuzzy neural networks. Furthermore, a relationship between the learning rate parameter and the penalty parameter is given that guarantees convergence. Simulation results support the theoretical findings and show the superiority of smoothing L1/2 regularization over the original L1/2 regularization. [Copyright © Elsevier]
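The core idea of the abstract, gradient descent with a smooth surrogate for the nonsmooth L1/2 penalty, can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: it uses a plain linear least-squares model instead of a T–S fuzzy network, and the surrogate `(w^2 + eps)^(1/4)` is an assumed smooth approximation of `|w|^(1/2)` (the paper derives its own piecewise smoothing function). The names and hyperparameters (`lam`, `lr`, `eps`) are hypothetical.

```python
import numpy as np

def smooth_l12_penalty(w, eps=1e-4):
    # Smooth surrogate for sum_i |w_i|^(1/2): (w^2 + eps)^(1/4) is
    # differentiable everywhere and approaches |w|^(1/2) as eps -> 0.
    # (Illustrative choice; the paper uses a different smoothing.)
    return np.sum((w ** 2 + eps) ** 0.25)

def smooth_l12_grad(w, eps=1e-4):
    # d/dw (w^2 + eps)^(1/4) = w / (2 * (w^2 + eps)^(3/4)),
    # which is bounded and continuous, so no oscillation near w = 0.
    return w / (2.0 * (w ** 2 + eps) ** 0.75)

def train(X, y, lam=0.01, lr=0.05, iters=2000, eps=1e-4):
    """Batch gradient descent on 0.5*||Xw - y||^2 / n + lam * penalty."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n + lam * smooth_l12_grad(w, eps)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([2.0, -1.5, 0.0, 0.0, 0.0])  # last three weights irrelevant
y = X @ true_w + 0.01 * rng.normal(size=200)
w = train(X, y)
```

The penalty gradient grows sharply as a weight approaches zero, so irrelevant weights are driven toward zero (the sparsity-promoting effect the abstract describes), while the smoothing keeps that gradient bounded and the iteration stable.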

Details

Language :
English
ISSN :
09252312
Volume :
138
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
95930547
Full Text :
https://doi.org/10.1016/j.neucom.2014.01.041