
Performance of ChatGPT on Nephrology Test Questions.

Authors :
Miao J
Thongprayoon C
Garcia Valencia OA
Krisanapan P
Sheikh MS
Davis PW
Mekraksakit P
Suarez MG
Craici IM
Cheungpasitporn W
Source :
Clinical journal of the American Society of Nephrology : CJASN [Clin J Am Soc Nephrol] 2024 Jan 01; Vol. 19 (1), pp. 35-43. Date of Electronic Publication: 2023 Oct 18.
Publication Year :
2024

Abstract

Background: ChatGPT is a novel tool that allows people to engage in conversations with an advanced machine learning model. ChatGPT's performance on the US Medical Licensing Examination is comparable with that of a successful candidate. However, its performance in the field of nephrology remains undetermined. This study assessed ChatGPT's capabilities in answering nephrology test questions.

Methods: Questions sourced from the Nephrology Self-Assessment Program and the Kidney Self-Assessment Program were used, each consisting of multiple-choice, single-answer questions. Questions containing visual elements were excluded. Each question bank was run twice using GPT-3.5 and GPT-4. Performance was assessed using the total accuracy rate, defined as the percentage of questions ChatGPT answered correctly in either the first or second run, and the total concordance, defined as the percentage of questions for which ChatGPT provided identical answers in both runs, regardless of correctness.

Results: A comprehensive assessment was conducted on a set of 975 questions, comprising 508 questions from the Nephrology Self-Assessment Program and 467 from the Kidney Self-Assessment Program. GPT-3.5 achieved a total accuracy rate of 51%. Notably, the Nephrology Self-Assessment Program yielded a higher accuracy rate than the Kidney Self-Assessment Program (58% versus 44%; P < 0.001). The total concordance rate across all questions was 78%, with correct answers exhibiting a higher concordance rate (84%) than incorrect answers (73%) (P < 0.001). Across nephrology subfields, total accuracy rates were relatively lower in electrolyte and acid-base disorders, glomerular disease, and kidney-related bone and stone disorders. The total accuracy rate of GPT-4 was 74%, higher than that of GPT-3.5 (P < 0.001) but still below the passing threshold and the average score of nephrology examinees (77%).

Conclusions: ChatGPT exhibited limitations in accuracy and repeatability when addressing nephrology-related questions. Variations in performance were evident across subfields.

(Copyright © 2023 by the American Society of Nephrology.)
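The two metrics defined in the Methods can be made concrete with a short sketch. The Python snippet below is illustrative only; variable names such as run1, run2, and answer_key are assumptions and do not come from the study's own code.

```python
# Minimal sketch of the "total accuracy" and "total concordance" metrics
# described in the Methods, assuming each run is a list of the model's chosen
# options and answer_key holds the correct option for each question.

def total_accuracy(run1, run2, answer_key):
    """Share of questions answered correctly in either the first or second run."""
    correct = sum(
        1 for a1, a2, key in zip(run1, run2, answer_key)
        if a1 == key or a2 == key
    )
    return correct / len(answer_key)

def total_concordance(run1, run2):
    """Share of questions where both runs gave the same answer, right or wrong."""
    identical = sum(1 for a1, a2 in zip(run1, run2) if a1 == a2)
    return identical / len(run1)

# Toy example with five questions (hypothetical data):
key  = ["A", "C", "B", "D", "A"]
run1 = ["A", "C", "D", "D", "B"]
run2 = ["A", "B", "D", "D", "B"]
print(total_accuracy(run1, run2, key))   # 0.6 -> 3 of 5 correct in at least one run
print(total_concordance(run1, run2))     # 0.8 -> 4 of 5 identical answers across runs
```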

Details

Language :
English
ISSN :
1555-905X
Volume :
19
Issue :
1
Database :
MEDLINE
Journal :
Clinical journal of the American Society of Nephrology : CJASN
Publication Type :
Academic Journal
Accession number :
37851468
Full Text :
https://doi.org/10.2215/CJN.0000000000000330