
Assessment Study of ChatGPT-3.5's Performance on the Final Polish Medical Examination: Accuracy in Answering 980 Questions.

Authors :
Siebielec, Julia
Ordak, Michal
Oskroba, Agata
Dworakowska, Anna
Bujalska-Zadrozny, Magdalena
Source :
Healthcare (2227-9032); Aug2024, Vol. 12 Issue 16, p1637, 13p
Publication Year :
2024

Abstract

Background/Objectives: The use of artificial intelligence (AI) in education is growing dynamically, and models such as ChatGPT show potential for enhancing medical education. In Poland, to obtain a medical diploma, candidates must pass the Medical Final Examination, which consists of 200 questions with one correct answer per question, is administered in Polish, and assesses students' comprehensive medical knowledge and readiness for clinical practice. The aim of this study was to determine how ChatGPT-3.5 handles the questions included in this exam. Methods: This study considered 980 questions from five examination sessions of the Medical Final Examination conducted by the Medical Examination Center in the years 2022–2024. The analysis included the field of medicine, the difficulty index of the questions, and their type, namely theoretical versus case-study questions. Results: The average correct answer rate achieved by ChatGPT across the five examination sessions was approximately 60% and was lower (p < 0.001) than the average score achieved by the examinees. The lowest percentage of correct answers was in hematology (42.1%), while the highest was in endocrinology (78.6%). The difficulty index of the questions showed a statistically significant correlation with the correctness of the answers (p = 0.04). Questions that ChatGPT-3.5 answered incorrectly had a lower (p < 0.001) percentage of correct responses among examinees. The type of question analyzed did not significantly affect the correctness of the answers (p = 0.46). Conclusions: This study indicates that ChatGPT-3.5 can be an effective tool for assisting in passing the final medical exam, but the results should be interpreted cautiously. It is recommended to further verify the correctness of the answers using various AI tools. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
2227-9032
Volume :
12
Issue :
16
Database :
Complementary Index
Journal :
Healthcare (2227-9032)
Publication Type :
Academic Journal
Accession number :
179382374
Full Text :
https://doi.org/10.3390/healthcare12161637