
Identifying Inconsistent Respondents to Mixed-Worded Scales in Large-Scale Assessments

Authors :
Steinmann, Isa
Braeken, Johan
Strietholt, Rolf
Source :
AERA Online Paper Repository. 2021.
Publication Year :
2021

Abstract

This study investigates consistent and inconsistent respondents to mixed-worded questionnaire scales in large-scale assessments. Mixed-worded scales contain both positively and negatively worded items and are widely used across survey and content areas. Because of the changing wording, these scales require a more careful reading and answering process than scales with only one type of wording (Marsh, 1986; Schmitt & Stults, 1985). Poor readers in particular might not notice the changing item wording (Marsh, 1986). Using mixed-worded scales can therefore have unintended consequences, because not all respondents answer positively and negatively worded items in a consistent way. This study assumes and aims to identify two distinct groups of respondents to mixed-worded scales: consistent and inconsistent respondents. We argue that this population heterogeneity underlies the common phenomenon of wording-related effects in mixed-worded scales (Gnambs & Schroeders, 2017; Marsh, 1986). We investigated five datasets from three large-scale assessments: first, n = 4,799 15-year-old students from the USA who were surveyed in PISA (Programme for International Student Assessment) 2015; second, n = 5,943 fourth-graders from Australia who participated in both TIMSS (Trends in International Mathematics and Science Study) and PIRLS (Progress in International Reading Literacy Study) 2011; and third, n = 4,989 fifth-graders and n = 4,791 ninth-graders from Germany who participated in NEPS (National Educational Panel Study) in 2010/2011 and 2014/2015. The mixed-worded scales measured reading self-concept in PISA and PIRLS, mathematics self-concept in TIMSS, and global self-esteem in NEPS. To identify two unobserved groups of respondents to the different mixed-worded scales, we formulated a constrained factor mixture model (e.g., Masyn et al., 2010) that operationalized the two assumed classes of respondents. We modeled the consistent class to show a response pattern that implies switching the side of the response scale (i.e., agreeing with positively worded items and disagreeing with negatively worded items, or vice versa) and the inconsistent class to show the same response pattern for both item types (i.e., agreeing or disagreeing with all items). In all five datasets, the estimated parameter patterns were in line with theoretical expectations, and the mixture models consistently outperformed more traditional two-dimensional confirmatory factor analysis models. Between 7% and 20% of respondents were found to belong to the inconsistent classes. To further substantiate and validate the interpretation of the proposed model, class membership was related to a theoretically relevant respondent characteristic, reading achievement. In line with expectations, reading achievement scores were lower in the classes of inconsistent respondents than in the classes of consistent respondents in all five datasets. The findings have implications for the use of mixed-worded questionnaire scales in large-scale assessments as well as for future research on interactions between survey instruments and respondents. The study further connects two previously unrelated strands of research: research on the detection of inconsistent/careless respondents and research on the reasons for unexpected item intercorrelation patterns in mixed-worded scales.
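
A minimal sketch, in assumed notation not taken from the paper, of the kind of two-class constrained factor mixture model described above:

f(y_i) = \sum_{c=1}^{2} \pi_c \, f_c(y_i \mid \nu_c + \lambda_c \eta_{ic}), \qquad \pi_1 + \pi_2 = 1,

where, in the consistent class, the loadings \lambda_{jc} take opposite signs for positively and negatively worded items j (respondents switch sides of the response scale), whereas in the inconsistent class the loadings are constrained to share the same sign for all items (the same response pattern on both item types). This is an illustrative parameterization; the specific constraints used in the paper may differ.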

Details

Language :
English
Database :
ERIC
Journal :
AERA Online Paper Repository
Publication Type :
Conference
Accession number :
ED627351
Document Type :
Speeches/Meeting Papers; Reports - Research