
Explaining the unsuitability of the kappa coefficient in the assessment and comparison of the accuracy of thematic maps obtained by image classification.

Authors :
Foody, Giles M.
Source :
Remote Sensing of Environment. March 2020, Vol. 239.
Publication Year :
2020

Abstract

The kappa coefficient is not an index of accuracy; indeed, it is not even an index of overall agreement, but of agreement beyond chance. Chance agreement is, however, irrelevant in an accuracy assessment and is, in any case, inappropriately modelled in the calculation of the kappa coefficient for typical remote sensing applications. The magnitude of a kappa coefficient is also difficult to interpret. Values spanning the full range of widely used interpretation scales, from a level of agreement estimated to arise from chance alone through to almost perfect agreement, can be obtained from classifications that satisfy demanding accuracy targets (e.g. for a classification with an overall accuracy of 95%, the range of possible values of the kappa coefficient is −0.026 to 0.900). Comparisons of kappa coefficients are particularly challenging if the classes vary in their abundance (i.e. prevalence), as the magnitude of a kappa coefficient reflects not only agreement in labelling but also properties of the populations under study. It is shown that all of the arguments put forward for the use of the kappa coefficient in accuracy assessment are flawed and/or irrelevant, as they apply equally to other, sometimes easier to calculate, measures of accuracy. Calls for the kappa coefficient to be abandoned in accuracy assessments should finally be heeded, and researchers are encouraged instead to report a set of simple measures and associated outputs, such as estimates of per-class accuracy and the confusion matrix, when assessing and comparing classification accuracy.

Highlights :
• Kappa is not a measure of accuracy but of agreement beyond chance, and chance correction is not needed.
• All arguments for the use of kappa are flawed or apply equally to other measures.
• Interpreting a kappa coefficient is difficult, due especially to the effects of prevalence and bias.
• A very accurate classification can be associated with a very wide range of kappa values.
• Kappa should not be routinely used in accuracy assessment or comparison.

[ABSTRACT FROM AUTHOR]
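To make the quoted range concrete, the following is a minimal sketch (not taken from the paper; the example matrices and function names are illustrative assumptions) of how the kappa coefficient is computed from a confusion matrix. It shows that two classifications with identical 95% overall accuracy can yield kappa values of 0.900 and roughly −0.026 once class prevalence becomes extreme.

```python
# Minimal sketch: overall accuracy and Cohen's kappa from a confusion matrix.
# The two example matrices below are hypothetical, chosen so that both have
# 95% overall accuracy yet sit at opposite ends of the kappa range the
# abstract reports (-0.026 to 0.900).
import numpy as np

def overall_accuracy(cm: np.ndarray) -> float:
    """Proportion of cases on the diagonal of the confusion matrix."""
    return np.trace(cm) / cm.sum()

def kappa(cm: np.ndarray) -> float:
    """Cohen's kappa: observed agreement (p_o) corrected by the chance
    agreement (p_e) expected from the row and column marginals."""
    n = cm.sum()
    p_o = np.trace(cm) / n
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (p_o - p_e) / (1 - p_e)

# Balanced classes: overall accuracy 0.95, kappa = 0.900.
balanced = np.array([[95, 5],
                     [5, 95]])

# Extreme prevalence: overall accuracy is still 0.95, but kappa ~ -0.026.
imbalanced = np.array([[1900, 50],
                       [50, 0]])

for cm in (balanced, imbalanced):
    print(overall_accuracy(cm), round(kappa(cm), 3))
# -> 0.95 0.9
# -> 0.95 -0.026
```

In the second matrix nearly all cases fall in one class, so the marginal-based chance agreement (p_e = 0.95125) exceeds the observed agreement (p_o = 0.95), driving kappa slightly negative even though 95% of cases are labelled correctly. This is the prevalence effect the abstract identifies as a reason kappa is hard to interpret and compare.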

Details

Language :
English
ISSN :
00344257
Volume :
239
Database :
Academic Search Index
Journal :
Remote Sensing of Environment
Publication Type :
Academic Journal
Accession number :
141638750
Full Text :
https://doi.org/10.1016/j.rse.2019.111630