Exploring the Comparability of Multiple-Choice and Constructed-Response Versions of Scenario-Based Assessment Tasks
- Author
- Herrmann-Abell, Cari F., Hardcastle, Joseph, and DeBoer, George E.
- Abstract
- As implementation of the "Next Generation Science Standards" moves forward, there is a need for new assessments that can measure students' integrated, three-dimensional science learning. The National Research Council has suggested that these assessments be multicomponent tasks that utilize a combination of item formats, including constructed-response (CR) and multiple-choice (MC). However, little guidance has been provided for determining the relative value or cost-effectiveness of those two formats. In this study, students were randomly assigned assessment tasks that contained either a constructed-response or a multiple-choice version of an otherwise equivalent item. Rasch analysis was used to compare the difficulty of these items on a common construct scale. We found that the set of items formed a broad unidimensional scale but that the constructed-response versions were more difficult than their multiple-choice counterparts. This added difficulty was found to be partially due to the increased writing demand and the reasoning element in the constructed-response rubric: students were more likely to recognize a clearly reasoned argument in a multiple-choice item than they were to create that reasoning themselves and communicate it in writing. Our findings can help instrument developers select a set of items that balances the time and effort students must expend during testing against the time and effort scorers must spend evaluating and scoring students' responses. In cases where constructing a response is an essential part of the targeted understanding, as when the target learning goal is to construct an argument or generate a model, CR items are needed; in other cases, MC items may be more efficient.
- Published
- 2022
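
The abstract notes that Rasch analysis was used to place the MC and CR versions of an item on a common difficulty scale under random assignment. The following is a minimal, hypothetical sketch of that idea, not the authors' code or data: it simulates students, invents difficulty values, and for simplicity treats student abilities as known while estimating only item difficulties. A real analysis would estimate both person and item parameters (for example, with joint or marginal maximum likelihood) and use the study's actual response matrix.

```python
# Hypothetical sketch of a Rasch (1PL) difficulty comparison; all values are invented.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_students = 500

# True difficulties (illustrative only): index 0 = MC version, index 1 = CR version
# of the "same" item, plus three anchor items that everyone answers.
true_b = np.array([-0.2, 0.6, -1.0, 0.0, 1.0])
theta = rng.normal(0.0, 1.0, n_students)      # simulated student abilities

# Random assignment: each student sees either the MC (0) or the CR (1) version.
form = rng.integers(0, 2, n_students)

def p_correct(theta, b):
    """Rasch model probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

# Simulate dichotomous responses.
probs = p_correct(theta, true_b)
resp = (rng.random(probs.shape) < probs).astype(float)

# Mark the item version each student did NOT see as missing by design.
mask = np.ones_like(resp, dtype=bool)
mask[form == 0, 1] = False   # MC takers skip the CR version
mask[form == 1, 0] = False   # CR takers skip the MC version

def neg_log_lik(b, theta, resp, mask):
    p = p_correct(theta, b)
    ll = resp * np.log(p) + (1 - resp) * np.log(1 - p)
    return -ll[mask].sum()

# Estimate the five item difficulties on one scale (abilities held fixed here).
fit = minimize(neg_log_lik, x0=np.zeros(5), args=(theta, resp, mask))
print("Estimated difficulties (MC, CR, anchors):", np.round(fit.x, 2))
```

In this toy setup the estimated CR difficulty comes out above the MC difficulty simply because it was simulated that way; the point of the sketch is only that anchoring both versions to common items lets their difficulties be compared on the same construct scale, as described in the abstract.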