
Quantifying error in OSCE standard setting for varying cohort sizes: A resampling approach to measuring assessment quality.

Authors :
Homer, Matt
Pell, Godfrey
Fuller, Richard
Patterson, John
Source :
Medical Teacher. Feb 2016, Vol. 38 Issue 2, p181-188. 8p.
Publication Year :
2016

Abstract

Background: The use of the borderline regression method (BRM) is a widely accepted standard setting method for OSCEs. However, it is unclear whether this method is appropriate for use with small cohorts (e.g. specialist post-graduate examinations). Aims and methods: This work uses an innovative application of resampling methods applied to four pre-existing OSCE data sets (number of stations between 17 and 21) from two institutions to investigate how the robustness of the BRM changes as the cohort size varies. Using a variety of metrics, the ‘quality’ of an OSCE is evaluated for cohorts of approximately n = 300 down to n = 15. Estimates of the standard error in station-level and overall pass marks, R² coefficient, and Cronbach’s alpha are all calculated as cohort size varies. Results and conclusion: For larger cohorts (n > 200), the standard error in the overall pass mark is small (less than 0.5%), and for individual stations is of the order of 1–2%. These errors grow as the sample size reduces, with cohorts of less than 50 candidates showing unacceptably large standard error. Alpha and R² also become unstable for small cohorts. The resampling methodology is shown to be robust and has the potential to be more widely applied in standard setting and medical assessment quality assurance and research. [ABSTRACT FROM PUBLISHER]
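The combination of methods the abstract describes can be sketched in code: the BRM sets a station pass mark by regressing checklist scores on examiners' global grades and reading off the predicted score at the "borderline" grade, and a bootstrap resample of the cohort at various sizes yields a standard error for that pass mark. The sketch below is illustrative only, with synthetic data and hypothetical grade/score scales (not the authors' data sets or code); the full study is at the DOI below.

```python
import numpy as np

def brm_pass_mark(scores, grades, borderline=2.0):
    """Borderline regression method: regress station checklist score on
    the examiner's global grade, then take the predicted score at the
    'borderline' grade as the station pass mark."""
    slope, intercept = np.polyfit(grades, scores, 1)
    return slope * borderline + intercept

def resampled_pass_mark_se(scores, grades, cohort_size, n_boot=1000, seed=None):
    """Estimate the standard error of the BRM pass mark for a given
    cohort size by repeatedly resampling candidates with replacement
    and recomputing the pass mark each time."""
    rng = np.random.default_rng(seed)
    n = len(scores)
    marks = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=cohort_size)  # bootstrap cohort
        marks[b] = brm_pass_mark(scores[idx], grades[idx])
    return marks.std(ddof=1)

# Synthetic station data (hypothetical scales, for illustration only):
rng = np.random.default_rng(42)
grades = rng.integers(1, 6, size=300).astype(float)      # global grades 1-5
scores = 10 * grades + 20 + rng.normal(0, 8, size=300)   # checklist score (%)

for n in (300, 50, 15):
    se = resampled_pass_mark_se(scores, grades, cohort_size=n, seed=0)
    print(f"cohort n={n:3d}: SE of station pass mark = {se:.2f}%")
```

Consistent with the abstract's finding, the estimated standard error grows as the simulated cohort shrinks, becoming large for cohorts well below 50 candidates.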

Details

Language :
English
ISSN :
0142-159X
Volume :
38
Issue :
2
Database :
Academic Search Index
Journal :
Medical Teacher
Publication Type :
Academic Journal
Accession number :
112902419
Full Text :
https://doi.org/10.3109/0142159X.2015.1029898