Stacking for Misclassification Cost Performance
- Author
Cameron-Jones, RM and Charman-Williams, A
- Abstract
This paper investigates the application of the multiple classifier technique known as "stacking" [23] to the task of classifier learning for misclassification cost performance, by straightforwardly adapting a technique successfully developed by Ting and Witten [19, 20] for classifier learning for accuracy performance. Experiments are reported comparing the performance of the stacked classifier with that of its component classifiers and of other proposed cost-sensitive multiple classifier methods: a variation of "bagging" and two "boosting"-style methods. These experiments confirm that stacking is competitive with the previously proposed methods. Some further experiments examine the performance of stacking with different numbers of component classifiers, including the case of stacking a single classifier, and demonstrate for the first time that stacking a single classifier can be beneficial for many data sets.
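
To make the idea concrete, the sketch below shows one way a cost-sensitive stacked classifier along these lines could be assembled. It is an illustration only, not the authors' exact procedure: the choice of base learners, the use of scikit-learn, the LogisticRegression meta-learner (standing in for the multi-response linear regression used by Ting and Witten), and the example cost matrix are all assumptions made for the sake of the example. The level-0 classifiers supply out-of-fold class probabilities as meta-level features, and the final prediction chooses the class with minimum expected misclassification cost under the meta-learner's probability estimates.

```python
# Hypothetical sketch of cost-sensitive stacking (not the paper's exact method):
# level-0 classifiers produce out-of-fold class probabilities, a level-1
# meta-learner is trained on those features, and predictions pick the class
# with minimum expected misclassification cost.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Illustrative (assumed) cost matrix: cost[i, j] is the cost of predicting
# class j when the true class is i; misclassifying class 1 is penalised more.
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])

base_learners = [DecisionTreeClassifier(random_state=0),
                 GaussianNB(),
                 KNeighborsClassifier()]

# Level-0: out-of-fold class probabilities become the meta-level features,
# so the meta-learner never sees predictions made on their own training folds.
meta_features = np.hstack([
    cross_val_predict(clf, X_train, y_train, cv=5, method="predict_proba")
    for clf in base_learners
])

# Level-1: meta-learner trained on the stacked probability features
# (LogisticRegression is a stand-in for Ting and Witten's MLR meta-learner).
meta_learner = LogisticRegression(max_iter=1000).fit(meta_features, y_train)

# Refit the base learners on all training data, then build test-time features.
for clf in base_learners:
    clf.fit(X_train, y_train)
test_meta = np.hstack([clf.predict_proba(X_test) for clf in base_learners])

# Cost-sensitive decision rule: predict the class minimising expected cost
# under the meta-learner's posterior estimates.
posterior = meta_learner.predict_proba(test_meta)  # shape (n_samples, n_classes)
expected_cost = posterior @ cost                   # expected cost per candidate class
y_pred = np.argmin(expected_cost, axis=1)

total_cost = cost[y_test, y_pred].sum()
print(f"Total misclassification cost on the test set: {total_cost:.1f}")
```

The minimum-expected-cost decision rule at the end is what distinguishes this from accuracy-oriented stacking: with an asymmetric cost matrix, the predicted class can differ from the most probable class whenever the cheaper error is the more likely one.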