Examining the Performance of the Trifactor Model for Multiple Raters.
- Source :
- Applied psychological measurement [Appl Psychol Meas] 2022 Jan; Vol. 46 (1), pp. 53-67. Date of Electronic Publication: 2021 Dec 07.
- Publication Year :
- 2022
Abstract
- Researchers in the social sciences often obtain ratings of a construct of interest provided by multiple raters. While using multiple raters provides a way to help avoid the subjectivity of any given person's responses, rater disagreement can be a problem. A variety of models exist to address rater disagreement in both structural equation modeling and item response theory frameworks. Recently, a model was developed by Bauer et al. (2013), referred to as the "trifactor model," to provide applied researchers with a straightforward way of estimating scores that are purged of variance that is idiosyncratic by rater. Although the intent of the model is to be usable and interpretable, little is known about the circumstances under which it performs well and those under which it does not. We conduct simulation studies to examine the performance of the trifactor model under a range of sample sizes and model specifications and then compare model fit, bias, and convergence rates.
- Competing Interests: Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
- (© The Author(s) 2021.)
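For readers unfamiliar with the model named in the abstract, the following is a minimal sketch of the general form of a trifactor-style decomposition for multiple-rater data, assuming the usual layout of a common (trait) factor, rater-specific (perspective) factors, and item-specific factors; the exact parameterization studied in the article may differ.

```latex
% Sketch of a trifactor-style decomposition (assumed general form):
% y_{ijr} = response of target i on item j as reported by rater r
y_{ijr} = \nu_{jr}
        + \lambda^{G}_{jr}\,\eta^{G}_{i}    % common (trait) factor shared across raters
        + \lambda^{P}_{jr}\,\eta^{P}_{ir}   % rater-specific (perspective) factor
        + \lambda^{S}_{jr}\,\eta^{S}_{ij}   % item-specific factor
        + \varepsilon_{ijr}                 % residual
```

With the factors assumed orthogonal, the common factor captures variance shared across raters, which is what allows scores to be "purged" of rater-idiosyncratic variance as the abstract describes.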
Details
- Language :
- English
- ISSN :
- 1552-3497
- Volume :
- 46
- Issue :
- 1
- Database :
- MEDLINE
- Journal :
- Applied psychological measurement
- Publication Type :
- Academic Journal
- Accession number :
- 34898747
- Full Text :
- https://doi.org/10.1177/01466216211051728