Statistical Comparisons of Classifiers over Multiple Data Sets.
- Source :
- Journal of Machine Learning Research, 1/1/2006, Vol. 7, Issue 1, p1-30. 30p. 7 Charts, 7 Graphs.
- Publication Year :
- 2006
Abstract
- While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams.
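The Wilcoxon signed-ranks test the abstract recommends ranks the absolute per-data-set differences between two classifiers' scores, sums the ranks of the positive and the negative differences, and takes the smaller sum as the test statistic. A minimal stdlib-only sketch of that procedure (the accuracy values below are hypothetical, not from the article):

```python
import math

def wilcoxon_signed_ranks(scores_a, scores_b):
    """Wilcoxon signed-ranks test for two classifiers over N data sets.

    Returns (T, z): T is the smaller of the positive/negative rank sums,
    z is the normal approximation (reasonable for roughly N > 25).
    Zero differences are dropped; ties in |difference| get average ranks.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    # Rank the absolute differences, assigning average ranks to ties.
    order = sorted(range(n), key=lambda i: abs(nonzero[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(nonzero[order[j + 1]]) == abs(nonzero[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of the 1-based positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, nonzero) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, nonzero) if d < 0)
    t = min(r_plus, r_minus)
    z = (t - n * (n + 1) / 4) / math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return t, z

# Hypothetical accuracies of two classifiers on the same 10 data sets.
acc_a = [0.853, 0.792, 0.880, 0.721, 0.908, 0.765, 0.834, 0.812, 0.899, 0.743]
acc_b = [0.841, 0.795, 0.861, 0.714, 0.890, 0.752, 0.840, 0.801, 0.885, 0.730]
t, z = wilcoxon_signed_ranks(acc_a, acc_b)
print(f"T = {t}, z = {z:.3f}")  # compare z against -1.96 at alpha = 0.05
```

For more than two classifiers the abstract instead prescribes the Friedman test on per-data-set ranks, followed by post-hoc tests; the rank-with-ties step above is the same building block that test uses.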
- Subjects :
- *ALGORITHMS
*STATISTICS
*MACHINE learning
*ROBUST control
*MATHEMATICS
Details
- Language :
- English
- ISSN :
- 1532-4435
- Volume :
- 7
- Issue :
- 1
- Database :
- Academic Search Index
- Journal :
- Journal of Machine Learning Research
- Publication Type :
- Academic Journal
- Accession number :
- 20018430