Online Learning Algorithms Can Converge Comparably Fast as Batch Learning.
- Source :
- IEEE Transactions on Neural Networks & Learning Systems, Jun 2018, Vol. 29, Issue 6, p. 2367-2378. 12 p.
- Publication Year :
- 2018
Abstract
- Online learning algorithms in a reproducing kernel Hilbert space associated with convex loss functions are studied. We show that in terms of the expected excess generalization error, they can converge comparably fast as corresponding kernel-based batch learning algorithms. Under mild conditions on loss functions and approximation errors, fast learning rates and finite sample upper bounds are established using polynomially decreasing step-size sequences. For some commonly used loss functions for classification, such as the logistic and the p-norm hinge loss functions with p ∈ [1, 2], the learning rates are the same as those for Tikhonov regularization and can be of order O(T^{-1/2} log T), which are nearly optimal up to a logarithmic factor. Our novelty lies in a sharp estimate for the expected values of norms of the learning sequence (or an inductive argument to uniformly bound the expected risks of the learning sequence in expectation) and a refined error decomposition for online learning algorithms. [ABSTRACT FROM AUTHOR]
- Subjects :
- *MACHINE learning
*MATHEMATICAL programming
*ARTIFICIAL neural networks
Details
- Language :
- English
- ISSN :
- 2162-237X
- Volume :
- 29
- Issue :
- 6
- Database :
- Academic Search Index
- Journal :
- IEEE Transactions on Neural Networks & Learning Systems
- Publication Type :
- Periodical
- Accession number :
- 129655398
- Full Text :
- https://doi.org/10.1109/TNNLS.2017.2677970