1. Analysis of Boundedness and Convergence of Online Gradient Method for Two-Layer Feedforward Neural Networks.
- Author
- Lu Xu, Jinshu Chen, Defeng Huang, Jianhua Lu, and Licai Fang
- Subjects
- Stochastic convergence; Feedforward neural networks; Difference equations; Mathematical bounds; Parameter estimation
- Abstract
- This paper presents a theoretical boundedness and convergence analysis of the online gradient method for training two-layer feedforward neural networks. The well-known linear difference equation is extended to the general case of linear or nonlinear activation functions. Based on this extended difference equation, we investigate the boundedness and convergence of the parameter sequence of concern, which is trained on a finite set of training samples with a constant learning rate. We show that the uniform upper bound of the parameter sequence, which is very important in the training procedure, is the solution of an inequality involving the bound. It is further verified that, for the case of a linear activation function, a solution always exists and the parameter sequence can be uniformly upper bounded, while for the case of a nonlinear activation function, simple adjustments to the training set or the activation function can be derived to improve the boundedness property. For the convergence analysis, it is shown that the parameter sequence converges into a zone around an optimal solution at which the error function attains its global minimum, where the size of the zone depends on the learning rate. In particular, for the case of perfect modeling, a stronger global convergence result is proved: the parameter sequence always converges to an optimal solution. [ABSTRACT FROM AUTHOR]
- Published
- 2013
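
As a concrete illustration of the training procedure analyzed in the abstract above, the following is a minimal sketch of the online gradient method for a two-layer feedforward network trained on a finite sample set with a constant learning rate. The sigmoid activation, the squared-error loss, and all identifiers (`train_online`, `hidden_dim`, `eta`) are illustrative assumptions and do not reflect the paper's notation or results.

```python
# Minimal sketch of online (sample-by-sample) gradient descent for a
# two-layer feedforward network: finite training set, constant learning rate.
# Names and choices here (sigmoid hidden units, linear output, squared error)
# are assumptions for illustration, not the paper's setup.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_online(X, y, hidden_dim=8, eta=0.01, epochs=100, seed=0):
    """Online gradient method with a constant learning rate eta."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    W = rng.normal(scale=0.1, size=(hidden_dim, n_features))  # input-to-hidden weights
    v = rng.normal(scale=0.1, size=hidden_dim)                # hidden-to-output weights
    for _ in range(epochs):
        for i in range(n_samples):
            x, t = X[i], y[i]
            h = sigmoid(W @ x)            # hidden-layer activations
            out = v @ h                   # linear output unit
            err = out - t                 # prediction error
            # Gradients of the squared error 0.5 * err**2 w.r.t. v and W
            grad_v = err * h
            grad_W = np.outer(err * v * h * (1.0 - h), x)
            # Constant-learning-rate parameter update (one sample at a time)
            v -= eta * grad_v
            W -= eta * grad_W
    return W, v

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 3))
    y = np.sin(X[:, 0])                   # toy regression target
    W, v = train_online(X, y)
    preds = sigmoid(X @ W.T) @ v
    print("mean squared error:", np.mean((preds - y) ** 2))
```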