Performance of ChatGPT on Chinese Master's Degree Entrance Examination in Clinical Medicine.
- Source :
- PloS one [PLoS One] 2024 Apr 04; Vol. 19 (4), pp. e0301702. Date of Electronic Publication: 2024 Apr 04 (Print Publication: 2024).
- Publication Year :
- 2024
Abstract
- Background: ChatGPT is a large language model designed to generate responses based on a contextual understanding of user queries and requests. This study utilised the entrance examination for the Master of Clinical Medicine in Traditional Chinese Medicine to assess the reliability and practicality of ChatGPT within the domain of medical education.
Methods: We selected 330 single- and multiple-choice questions from the 2021 and 2022 Chinese Master of Clinical Medicine comprehensive examinations, none of which included images or tables. To ensure the test's accuracy and authenticity, we preserved the original wording of the questions and answer options, without any modifications or explanations.
Results: Both ChatGPT3.5 and GPT-4 attained average scores surpassing the admission threshold. Notably, ChatGPT achieved its highest score in the Medical Humanities section, with a correct rate of 93.75%. However, ChatGPT3.5 exhibited its lowest accuracy, 37.5%, in the Pathology section, while GPT-4 showed a relatively low accuracy of 60.23% in the Biochemistry section. An analysis of sub-questions revealed that ChatGPT performs well on single-choice questions but poorly on multiple-choice questions.
Conclusion: ChatGPT exhibits a degree of medical knowledge and the capacity to aid in diagnosing and treating diseases. Nevertheless, improvements are needed to address its limitations in accuracy and reliability. Its use must be accompanied by rigorous evaluation and oversight, along with proactive measures to overcome its current constraints.
Competing Interests: The authors have declared that no competing interests exist.
(Copyright: © 2024 Li et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
- Subjects :
- Humans
Reproducibility of Results
Asian People
Language
Clinical Medicine
Medicine
- Language :
- English
- ISSN :
- 1932-6203
- Volume :
- 19
- Issue :
- 4
- Database :
- MEDLINE
- Journal :
- PloS one
- Publication Type :
- Academic Journal
- Accession number :
- 38573944
- Full Text :
- https://doi.org/10.1371/journal.pone.0301702