
Determining the Scoring Validity of a Co-Constructed CEFR-Based Rating Scale

Authors :
Deygers, Bart
Van Gorp, Koen
Source :
Language Testing. Oct 2015 32(4):521-541.
Publication Year :
2015

Abstract

Considering scoring validity as encompassing both reliable rating scale use and valid descriptor interpretation, this study reports on the validation of a CEFR-based scale that was co-constructed and used by novice raters. The research questions this paper addresses are (a) whether it is possible to construct, with novice raters, a CEFR-based rating scale that yields reliable ratings and (b) whether such a scale allows for a uniform interpretation of the descriptors. Additionally, this study examines whether co-constructing a rating scale with novice raters helps to foster a shared interpretation of the descriptors over time. For this study, six novice raters employed a CEFR-based scale that they had co-constructed with 14 peers to rate 200 spoken and written performances in a missing data design. The quantitative data were analysed using item response theory, classical test theory and principal component analysis. The focus group data, collected after the rating process, were transcribed and coded using both a priori and inductive coding. The results indicate that novice raters can use the CEFR-based rating scale reliably, but that their interpretations of the descriptors, in spite of training and co-construction, are not as homogeneous as the inter-rater reliability would suggest.

Details

Language :
English
ISSN :
0265-5322
Volume :
32
Issue :
4
Database :
ERIC
Journal :
Language Testing
Publication Type :
Academic Journal
Accession number :
EJ1081138
Document Type :
Journal Articles; Reports - Research
Full Text :
https://doi.org/10.1177/0265532215575626