
Gradient learning in a classification setting by gradient descent

Authors :
Cai, Jia
Wang, Hongyan
Zhou, Ding-Xuan
Source :
Journal of Approximation Theory. Dec 2009, Vol. 161, Issue 2, p674-692. 19p.
Publication Year :
2009

Abstract

Learning gradients is one approach for variable selection and feature covariation estimation when dealing with large data of many variables or coordinates. In a classification setting involving a convex loss function, a possible algorithm for gradient learning is implemented by solving convex quadratic programming optimization problems induced by regularization schemes in reproducing kernel Hilbert spaces. The complexity for such an algorithm might be very high when the number of variables or samples is huge. We introduce a gradient descent algorithm for gradient learning in classification. The implementation of this algorithm is simple and its convergence is elegantly studied. Explicit learning rates are presented in terms of the regularization parameter and the step size. Deep analysis for approximation by reproducing kernel Hilbert spaces under some mild conditions on the probability measure for sampling allows us to deal with a general class of convex loss functions. [Copyright © Elsevier]
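To make the abstract's setting concrete, the sketch below illustrates the general idea of gradient learning by gradient descent: each component of the gradient vector field is represented in an RKHS via kernel coefficients, and a regularized empirical risk built from a convex loss on first-order differences is minimized by plain gradient descent with step size `eta` and regularization parameter `lam`. This is a simplified illustration, not the paper's exact scheme: the Gaussian kernel, logistic loss, locality weights, and all names (`learn_gradient`, `eta`, `lam`, `sigma`, `s`) are assumptions chosen for the sketch.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """Gaussian (RBF) Gram matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def learn_gradient(X, y, lam=0.1, eta=0.05, sigma=1.0, s=1.0, T=200):
    """Simplified gradient-learning sketch (illustrative, not the paper's algorithm).

    Each gradient component f_l lies in the RKHS of K, represented as
    f_l(x) = sum_i C[l, i] K(x_i, x).  We minimize a weighted logistic
    loss on the first-order terms y_j * <f(x_i), x_j - x_i> plus
    lam * sum_l ||f_l||_K^2 by gradient descent on the coefficients C.
    """
    m, n = X.shape
    K = gaussian_kernel(X, X, sigma)          # m x m Gram matrix of the RKHS kernel
    W = gaussian_kernel(X, X, s)              # locality weights w_ij
    D = X[None, :, :] - X[:, None, :]         # D[i, j, l] = (x_j - x_i)_l
    C = np.zeros((n, m))                      # one coefficient row per coordinate
    for _ in range(T):
        F = C @ K                             # F[l, i] = f_l(x_i)
        # margin of the first-order approximation at each pair (i, j)
        M = y[None, :] * np.einsum('li,ijl->ij', F, D)
        G = -y[None, :] / (1.0 + np.exp(M))   # derivative of logistic loss in M
        WG = W * G / (m * m)
        gF = np.einsum('ij,ijl->li', WG, D)   # risk gradient w.r.t. F[l, i]
        grad = gF @ K + 2.0 * lam * (C @ K)   # chain through F = C K, add penalty
        C -= eta * grad
    return C, K

# Usage: labels depend only on the first coordinate, so its RKHS norm
# ||f_0||_K should dominate -- the variable-selection effect of gradient learning.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sign(X[:, 0])
C, K = learn_gradient(X, y)
grad_norms = np.sqrt(np.einsum('li,ij,lj->l', C, K, C))  # ||f_l||_K per coordinate
```

The RKHS norms of the learned components rank the coordinates by relevance, which is how gradient learning supports variable selection; the step size `eta` and parameter `lam` play the roles of the quantities in the paper's explicit learning rates.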

Details

Language :
English
ISSN :
0021-9045
Volume :
161
Issue :
2
Database :
Academic Search Index
Journal :
Journal of Approximation Theory
Publication Type :
Academic Journal
Accession number :
45202362
Full Text :
https://doi.org/10.1016/j.jat.2008.12.002