Scoring and Consequential Validity Evidence of Computer- and Paper-Based Writing Tests in Times of Change.
- Author
- Guapacha-Chamorro, María and Chaves-Varón, Orlando
- Subjects
ELECTRONIC records, LECTURERS, METHODOLOGY, ENGLISH as a foreign language, COMPREHENSION
- Abstract
Little is known about how the assessment modality, i.e., computer-based (CB) versus paper-based (PB) tests, affects language teachers' scoring, perceptions, and preferences and, therefore, the validity and fairness of classroom writing assessments. The present mixed-methods study used Shaw and Weir's (2007) sociocognitive writing test validation framework to examine the scoring and consequential validity evidence of CB and PB writing tests in EFL classroom assessment in higher education. Original handwritten and word-processed texts of 38 EFL university students were transcribed into the opposite format and assessed by three language lecturers (N = 456 texts, 152 per teacher) to examine the scoring validity of CB and PB tests. The teachers' perceptions of text quality and preferences for assessment modality accounted for the consequential validity evidence of both tests. Findings revealed that the assessment modality influenced teachers' scoring, perceptions, and preferences. The teachers awarded higher scores to original and transcribed handwritten texts, particularly for text organization and language use. The teachers' perceptions of text quality differed from their ratings, and physical, psychological, and experiential characteristics influenced their preferences for assessment modality. The results have implications for the validity and fairness of CB and PB writing tests and for teachers' assessment practices. [ABSTRACT FROM AUTHOR]
- Published
- 2024