Benchmarking procedures for characterizing the extent of rater agreement: a comparative study
- Author
- Amalia Vanacore, Maria Sole Pellegrino
- Subjects
- benchmarking procedure, Monte Carlo method, κ-type agreement coefficients, misclassification rate, Benchmarking, Statistical physics, Management Science and Operations Research, Safety, Risk, Reliability and Quality, Monte Carlo simulation, Mathematics
- Abstract
Decision-making processes often rely on subjective evaluations provided by human raters. In the absence of a gold standard against which to check evaluation trueness, a rater's evaluative performance is generally measured through rater agreement coefficients. In this study, some parametric and non-parametric inferential benchmarking procedures for characterizing the extent of rater agreement, assessed via κ-type agreement coefficients, are illustrated. A Monte Carlo simulation study was conducted to compare the performance of each procedure in terms of the weighted misclassification rate computed over all agreement categories. Moreover, in order to investigate whether the procedures overestimate or underestimate the level of agreement, misclassifications were also computed for each specific category alone. The practical application of the coefficients and inferential benchmarking procedures is illustrated via two real data sets exemplifying different experimental conditions, so as to highlight performance differences due to sample size.
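As a rough sketch of the kind of inferential benchmarking the abstract describes, the snippet below computes Cohen's kappa, κ = (p_o - p_e) / (1 - p_e), and classifies the lower bound of a percentile-bootstrap confidence interval against the Landis-Koch benchmark scale. The choice of coefficient, the bootstrap scheme, and the benchmark scale are illustrative assumptions, not the authors' exact parametric and non-parametric procedures.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact procedures):
# Cohen's kappa for two raters plus a nonparametric bootstrap benchmarking
# step that maps the lower confidence bound onto Landis-Koch categories.
import numpy as np

def cohens_kappa(r1, r2, categories):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_o = np.mean(r1 == r2)                       # observed agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)                # chance-corrected agreement

def benchmark_bootstrap(r1, r2, categories, n_boot=2000, alpha=0.05, seed=0):
    """Benchmark kappa via the lower bound of a percentile bootstrap CI."""
    rng = np.random.default_rng(seed)
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)               # resample rated subjects
        boots.append(cohens_kappa(r1[idx], r2[idx], categories))
    lower = np.quantile(boots, alpha)             # one-sided lower bound
    # Landis-Koch benchmark scale (a common, though not unique, choice)
    cuts = [(0.80, "almost perfect"), (0.60, "substantial"),
            (0.40, "moderate"), (0.20, "fair"), (0.00, "slight")]
    label = next((name for t, name in cuts if lower > t), "poor")
    return lower, label

# Hypothetical usage: 30 subjects rated on a 4-point scale by two raters.
rng = np.random.default_rng(1)
true = rng.integers(0, 4, 30)
r1 = true.copy()
r2 = np.where(rng.random(30) < 0.8, true, rng.integers(0, 4, 30))
lower, label = benchmark_bootstrap(r1, r2, categories=range(4))
print(f"kappa lower bound = {lower:.3f} -> benchmarked as '{label}'")
```

Benchmarking against the confidence bound rather than the point estimate is what makes the procedure inferential: small samples widen the interval and push the classification toward lower agreement categories, which is the sample-size effect the study's comparison targets.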
- Published
- 2021