1. Evaluating ChatGPT-4 in medical education: an assessment of subject exam performance reveals limitations in clinical curriculum support for students
- Authors
Brendan P. Mackey, Razmig Garabet, Laura Maule, Abay Tadesse, James Cross, and Michael Weingarten
- Subjects
Artificial intelligence, Machine learning, Medical education, ChatGPT-4, USMLE Step 2, NBME shelf, Computational linguistics. Natural language processing, Electronic computers. Computer science
- Abstract
This study evaluates the proficiency of ChatGPT-4 across various medical specialties and assesses its potential as a study tool for medical students preparing for the United States Medical Licensing Examination (USMLE) Step 2 and related clinical subject exams. ChatGPT-4 answered board-level questions with 89% accuracy but showed significant discrepancies in performance across specialties. Although it excelled in psychiatry, neurology, and obstetrics and gynecology, it underperformed in pediatrics, emergency medicine, and family medicine. These variations may be attributed to the depth and recency of training data as well as the scope of the specialties assessed. Specialties with significant interdisciplinary overlap showed lower performance, suggesting that complex clinical scenarios pose a challenge to the AI. The overall efficacy of ChatGPT-4 indicates a promising supplemental role in medical education, but performance inconsistencies across specialties in the current version lead us to recommend that medical students use AI with caution.
- Published
2024