Learning noisy linear classifiers via adaptive and selective sampling.
- Source: Machine Learning; Apr 2011, Vol. 83, Issue 1, p71-102, 32p
- Publication Year: 2011
Abstract
- We introduce efficient margin-based algorithms for selective sampling and filtering in binary classification tasks. Experiments on real-world textual data reveal that our algorithms perform significantly better than popular and similarly efficient competitors. Using the so-called Mammen-Tsybakov low noise condition to parametrize the instance distribution, and assuming linear label noise, we show bounds on the convergence rate to the Bayes risk of a weaker adaptive variant of our selective sampler. Our analysis reveals that, excluding logarithmic factors, the average risk of this adaptive sampler converges to the Bayes risk at rate $N^{-\frac{(1+\alpha)(2+\alpha)}{2(3+\alpha)}}$, where N denotes the number of queried labels and $\alpha>0$ is the exponent in the low noise condition. For all $\alpha>\sqrt{3}-1\approx0.73$ this convergence rate is asymptotically faster than the rate $N^{-\frac{1+\alpha}{2+\alpha}}$ achieved by the fully supervised version of the base selective sampler, which queries all labels. Moreover, for $\alpha\to\infty$ (hard margin condition) the gap between the semi- and fully-supervised rates becomes exponential. [ABSTRACT FROM AUTHOR]
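
The crossover point quoted in the abstract follows from a direct comparison of the two exponents; taking the adaptive exponent $(1+\alpha)(2+\alpha)/(2(3+\alpha))$ and the supervised exponent $(1+\alpha)/(2+\alpha)$ as stated above, a short check gives

$\frac{(1+\alpha)(2+\alpha)}{2(3+\alpha)} > \frac{1+\alpha}{2+\alpha} \iff (2+\alpha)^2 > 2(3+\alpha) \iff \alpha^2+2\alpha-2 > 0 \iff \alpha > \sqrt{3}-1 \approx 0.73.$

As an illustration of the kind of margin-based selective sampling the abstract describes, the sketch below shows a generic online regularized least-squares (RLS) classifier that queries a label only when the current margin is small relative to an uncertainty term, and updates only on queried examples. This is a minimal sketch under assumed details (the function name `selective_sampling_rls`, the threshold scale `c`, the regularization `reg`, and the specific query rule are illustrative choices, not taken from the paper), not the authors' exact algorithm.

```python
import numpy as np

def selective_sampling_rls(stream, d, reg=1.0, c=1.0):
    """Illustrative margin-based selective sampler (hypothetical sketch).

    stream: iterable of (x, get_label) pairs, where x is a length-d array
            and get_label() returns the true label in {-1, +1} when queried
    d:      input dimension
    reg:    ridge regularization parameter (assumed)
    c:      scale of the query threshold (assumed)
    Returns the final weight vector and the number of queried labels.
    """
    A_inv = np.eye(d) / reg   # inverse of the regularized correlation matrix
    b = np.zeros(d)           # sum of y_t * x_t over queried rounds
    queried = 0
    for x, get_label in stream:
        w = A_inv @ b                      # current RLS weight vector
        margin = float(w @ x)              # signed margin of the prediction
        r = float(x @ A_inv @ x)           # uncertainty of x given past queries
        if abs(margin) <= c * np.sqrt(r):  # small margin vs. uncertainty: query
            y = get_label()                # ask for the true label
            queried += 1
            # Sherman-Morrison rank-one update of A_inv with the new example
            Ax = A_inv @ x
            A_inv -= np.outer(Ax, Ax) / (1.0 + float(x @ Ax))
            b += y * x
    return A_inv @ b, queried
```

To simulate label queries from a labeled dataset, each stream element can be built as `(x_t, lambda y=y_t: y)`, so the label is only "revealed" when the sampler actually asks for it.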
Details
- Language: English
- ISSN: 0885-6125
- Volume: 83
- Issue: 1
- Database: Complementary Index
- Journal: Machine Learning
- Publication Type: Academic Journal
- Accession Number: 59524123
- Full Text: https://doi.org/10.1007/s10994-010-5191-x