Examining the Precision of Cut Scores within a Generalizability Theory Framework: A Closer Look at the Item Effect
- Source :
- Journal of Educational Measurement, Summer 2020, 57(2):216-229
- Publication Year :
- 2020
Abstract
- An Angoff standard setting study generally yields judgments on a number of items by a number of judges (who may or may not be nested in panels). Variability associated with judges (and possibly panels) contributes error to the resulting cut score. The variability associated with items plays a more complicated role. To the extent that the mean item judgments directly reflect empirical item difficulties, the variability in Angoff judgments over items would not add error to the cut score, but to the extent that the mean item judgments do not correspond to the empirical item difficulties, variability in mean judgments over items would add error to the cut score. In this article, we present two generalizability-theory-based analyses of the proportion of the item variance that contributes to error in the cut score. For one approach, variance components are estimated on the probability (or proportion-correct) scale of the Angoff judgments, and for the other, the judgments are transformed to the theta scale of an item response theory model before estimating the variance components. The two analyses yield somewhat different results, but both indicate that it is not appropriate to simply ignore the item variance component in estimating the error variance.
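The decomposition the abstract describes follows standard generalizability theory for a crossed items-by-judges (i x j) design: the Angoff cut score is the grand mean of the judgments, and its error variance is conventionally estimated as sigma^2_j / n_j + sigma^2_ij / (n_i * n_j), with the item component sigma^2_i excluded. The sketch below illustrates that computation on the probability scale. It is a minimal illustration under assumptions, not the authors' analysis: the function names, the toy data, and the `item_error_fraction` parameter (a stand-in for the proportion of item variance that the paper argues should count as error) are all introduced here for demonstration.

```python
import numpy as np

def variance_components(x):
    """Estimate variance components for a fully crossed items x judges
    random-effects design (one observation per cell) via the standard
    ANOVA expected-mean-square equations.  x has shape (n_items, n_judges)."""
    n_i, n_j = x.shape
    grand = x.mean()
    item_means = x.mean(axis=1)
    judge_means = x.mean(axis=0)

    ss_i = n_j * ((item_means - grand) ** 2).sum()
    ss_j = n_i * ((judge_means - grand) ** 2).sum()
    ss_res = ((x - grand) ** 2).sum() - ss_i - ss_j

    ms_i = ss_i / (n_i - 1)
    ms_j = ss_j / (n_j - 1)
    ms_res = ss_res / ((n_i - 1) * (n_j - 1))

    var_ij = ms_res                          # interaction confounded with error
    var_i = max((ms_i - ms_res) / n_j, 0.0)  # item variance component
    var_j = max((ms_j - ms_res) / n_i, 0.0)  # judge variance component
    return var_i, var_j, var_ij

def cut_score_se(x, item_error_fraction=0.0):
    """Standard error of the Angoff cut score (the grand mean of judgments).
    `item_error_fraction` is a hypothetical parameter: the share of the item
    variance treated as error.  Setting it to 0 reproduces the common
    practice of ignoring the item facet entirely."""
    n_i, n_j = x.shape
    var_i, var_j, var_ij = variance_components(x)
    err = (var_j / n_j
           + var_ij / (n_i * n_j)
           + item_error_fraction * var_i / n_i)
    return np.sqrt(err)

# Toy example: 30 items x 8 judges of proportion-correct judgments.
rng = np.random.default_rng(0)
x = np.clip(0.6
            + rng.normal(0, 0.10, (30, 1))   # item effects
            + rng.normal(0, 0.05, (1, 8))    # judge effects
            + rng.normal(0, 0.08, (30, 8)),  # interaction/error
            0, 1)
print(cut_score_se(x, item_error_fraction=0.0))  # item facet ignored
print(cut_score_se(x, item_error_fraction=0.5))  # half of item variance as error
```

Running the example shows the estimated standard error growing as a larger fraction of the item variance is counted as error, which is the practical consequence of the paper's conclusion that the item component cannot simply be ignored.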
Details
- Language :
- English
- ISSN :
- 0022-0655
- Volume :
- 57
- Issue :
- 2
- Database :
- ERIC
- Journal :
- Journal of Educational Measurement
- Publication Type :
- Academic Journal
- Accession Number :
- EJ1255534
- Document Type :
- Journal Articles; Reports - Evaluative
- Full Text :
- https://doi.org/10.1111/jedm.12247