1. eOSCE stations live versus remote evaluation and scores variability
- Authors
Bouzid, Donia; Mullaert, Jimmy; Ghazali, Aiham; Ferré, Valentine Marie; Mentré, France; Lemogne, Cédric; Ruszniewski, Philippe; Faye, Albert; Tran Dinh, Alexy; Mirault, Tristan; Peiffer-Smadja, Nathan; Muller, Léonore; Falque Pierrotin, Laure; Thy, Michael; Assadi, Maksud; Yung, Sonia; de Tymowski, Christian; Le Hingrat, Quentin; Eyer, Xavier; Wicky, Paul Henri; Oualha, Mehdi; Houdouin, Véronique; Jabre, Patricia; Vodovar, Dominique; Dioguardi Burgio, Marco; Zucman, Noémie; Tsopra, Rosy; Tazi, Asmaa; Ressaire, Quentin; Nguyen, Yann; Girard, Muriel; Frachon, Adèle; Depret, François; Pellat, Anna; de Masson, Adèle; Azais, Henri; de Castro, Nathalie; Jeantrelle, Caroline; Javaud, Nicolas; Malmartel, Alexandre; Jacquin de Margerie, Constance; Chousterman, Benjamin; Fournel, Ludovic; Holleville, Mathilde; Blanche, Stéphane
- Affiliations
Infection, Anti-microbiens, Modélisation, Evolution (IAME, UMR_S_1137 / U1137), INSERM, Université Paris Cité (UPCité), Université Sorbonne Paris Nord; Institut de psychiatrie et neurosciences de Paris (IPNP, U1266 Inserm), INSERM, Université Paris Cité; Hôpital Beaujon (AP-HP); Hôpital universitaire Robert-Debré (AP-HP), Paris; CIC Hôpital Bichat, Hôpital Bichat - Claude Bernard (AP-HP), Paris, INSERM, UFR de Médecine; Hôpital Européen Georges Pompidou (HEGP, AP-HP), Hôpitaux Universitaires Paris Ouest - Hôpitaux Universitaires Île de France Ouest (HUPO); Health data- and model-driven Knowledge Acquisition (HeKA), Inria de Paris, Centre de Recherche des Cordeliers (CRC, UMR_S_1138 / U1138), École Pratique des Hautes Études (EPHE), Université Paris sciences et lettres (PSL), INSERM, Sorbonne Université (SU), Université Paris Cité (UPCité)
- Subjects
[SHS.EDU] Humanities and Social Sciences/Education; General Medicine; Education
- Abstract
Background: Objective structured clinical examinations (OSCEs) are known to be a fair evaluation method. In recent years, the use of online OSCEs (eOSCEs) has spread. This study aimed to compare remote versus live evaluation and to assess the factors associated with score variability during eOSCEs.
Methods: We conducted large-scale eOSCEs at the medical school of Université Paris Cité in June 2021 and recorded all the students' performances, allowing a second evaluation. To assess agreement in our context of multiple raters and students, we fitted a linear mixed model with student and rater as random effects and the score as the explained variable.
Results: One hundred seventy observations were analyzed for the first station after quality control. We retained 192 and 110 observations for the statistical analysis of the two other stations. The median scores were 60 out of 100 (IQR 50–70), 60 out of 100 (IQR 54–70), and 53 out of 100 (IQR 45–62) for the three stations. The proportions of score variance explained by the rater (rater ICC) were 23.0%, 16.8%, and 32.8%, respectively. Of the 31 raters, 18 (58%) were male. Scores did not differ significantly according to the gender of the rater (p = 0.96, 0.10, and 0.26, respectively). The two evaluations showed no systematic difference in scores (p = 0.92, 0.053, and 0.38, respectively).
Conclusion: Our study suggests that remote evaluation is as reliable as live evaluation for eOSCEs.
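As a rough illustration of the analysis described in the Methods, the sketch below fits a crossed random-effects model (student and rater as random intercepts) and derives the rater ICC as the share of total score variance attributable to the rater. This is a minimal sketch under stated assumptions: the simulated data, the column names (score, student, rater), and the use of statsmodels are illustrative choices, not the authors' actual pipeline or software.

```python
# Minimal sketch (assumed setup): crossed random effects for student and rater,
# then rater ICC = var_rater / (var_student + var_rater + var_residual).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# --- Simulated stand-in data (hypothetical; the study data are not reproduced here) ---
rng = np.random.default_rng(0)
n_students, n_raters = 60, 10
student = np.repeat(np.arange(n_students), 2)        # each performance scored twice
rater = rng.integers(0, n_raters, size=student.size)
score = (60
         + rng.normal(0, 8, n_students)[student]     # student effect
         + rng.normal(0, 5, n_raters)[rater]         # rater effect
         + rng.normal(0, 6, student.size))           # residual noise
df = pd.DataFrame({"score": score, "student": student, "rater": rater})
df["all"] = 1  # single dummy group so student and rater enter as crossed variance components

# Linear mixed model: fixed intercept only, variance components for student and rater
model = smf.mixedlm(
    "score ~ 1",
    data=df,
    groups="all",
    re_formula="0",  # no extra random intercept for the dummy group
    vc_formula={"student": "0 + C(student)", "rater": "0 + C(rater)"},
)
result = model.fit()

# Variance components are returned in the (sorted) order of the vc_formula keys
variances = dict(zip(model.exog_vc.names, result.vcomp))
var_resid = result.scale
icc_rater = variances["rater"] / (variances["rater"] + variances["student"] + var_resid)
print(f"rater ICC ≈ {icc_rater:.3f}")
```

In the paper, this quantity corresponds to the reported rater ICCs of 23.0%, 16.8%, and 32.8% per station; a modest rater ICC together with no systematic live-versus-remote score difference is what supports the conclusion that remote evaluation is comparably reliable.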
- Published
2022