
Measurement and control of bias in patient reported outcomes using multidimensional item response theory

Authors :
N. Maritza Dowling
Daniel M. Bolt
Sien Deng
Chenxi Li
Source :
BMC Medical Research Methodology, Vol 16, Iss 1, Pp 1-12 (2016)
Publication Year :
2016
Publisher :
BMC, 2016.

Abstract

Background: Patient-reported outcome (PRO) measures play a key role in advancing patient-centered care research. The accuracy of inferences, the relevance of predictions, and the true nature of associations made with PRO data all depend on the validity of these measures. Errors inherent to self-report measures can seriously bias the estimation of the constructs assessed by a scale. A well-documented disadvantage of self-report measures is their sensitivity to response style (RS) effects, such as a respondent's tendency to select the extremes of a rating scale. Although the biasing effect of extreme responding on constructs measured by self-report tools has been widely acknowledged and studied across disciplines, little attention has been given to the development and systematic application of methodologies to assess and control for this effect in PRO measures.

Methods: We review the methodological approaches that have been proposed to study extreme response style (ERS) effects. We applied a multidimensional item response theory model to simultaneously estimate and correct for the impact of ERS on trait estimation in a PRO instrument. Model estimates were used to study the biasing effects of ERS on sum scores for individuals with the same amount of the targeted trait but different levels of ERS. We evaluated the effect of jointly estimating multiple scales and ERS on trait estimates, and demonstrated the biasing effects of ERS on these trait estimates when they are used as explanatory variables.

Results: A four-dimensional model accounting for ERS bias provided a better fit to the response data. Increasing levels of ERS produced bias in total scores as a function of trait estimates. The effect of ERS was greater when the pattern of extreme responding was the same across multiple scales modeled jointly. The estimated item category intercepts provided evidence of content-independent category selection. Uncorrected trait estimates used as explanatory variables in prediction models showed downward bias.

Conclusions: A comprehensive evaluation of the psychometric quality and soundness of PRO assessment measures should incorporate the study of ERS as a potential nuisance dimension affecting the accuracy and validity of scores, and with them the impact of PRO data on clinical research and decision making.
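The Methods paragraph describes augmenting a PRO measurement model with an extra ERS dimension on which only the extreme rating categories load. The following is a minimal sketch of that idea for a single 5-point item, using a multidimensional nominal-response-style formulation; all loadings, intercepts, and trait values here are illustrative assumptions, not the parameters estimated in the article:

```python
import numpy as np

def category_probs(theta, eta, a=(0, 1, 2, 3, 4), b=(1, 0, 0, 0, 1), c=(0, 0, 0, 0, 0)):
    """Category probabilities for one 5-point item.

    theta : level on the targeted (substantive) trait
    eta   : level on the ERS (extreme response style) dimension
    a     : ordered category slopes on the trait (nominal-model style)
    b     : ERS loadings -- only the two extreme categories (0 and 4) load
    c     : category intercepts (set to zero in this toy example)
    """
    z = np.asarray(a) * theta + np.asarray(b) * eta + np.asarray(c)
    ez = np.exp(z - z.max())          # subtract max for numerical stability
    return ez / ez.sum()

def expected_item_score(theta, eta):
    """Expected observed score (0..4) for given trait and ERS levels."""
    p = category_probs(theta, eta)
    return float(np.dot(np.arange(5), p))

# Two hypothetical respondents with the SAME trait level but different ERS:
low_ers = expected_item_score(theta=0.5, eta=-1.0)
high_ers = expected_item_score(theta=0.5, eta=+1.0)
# For a positive trait level, stronger ERS inflates the expected sum score,
# illustrating the score bias the abstract describes.
```

Jointly estimating `theta` and `eta` (as the four-dimensional model in the article does across scales) is what allows the trait estimate to be corrected for this ERS-driven distortion.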

Details

Language :
English
ISSN :
1471-2288
Volume :
16
Issue :
1
Database :
Directory of Open Access Journals
Journal :
BMC Medical Research Methodology
Publication Type :
Academic Journal
Accession number :
edsdoj.fba84748b114e3dac6b3c75ad5a8908
Document Type :
Article
Full Text :
https://doi.org/10.1186/s12874-016-0161-z