Analysis of Responses of GPT-4V to the Japanese National Clinical Engineer Licensing Examination.
- Authors
- Ishida, Kai; Arisaka, Naoya; Fujii, Kiyotaka
- Subjects
- *GENERATIVE artificial intelligence, *EDUCATION, *DATA analysis, *ENGINEERING, *EDUCATIONAL tests & measurements, *DESCRIPTIVE statistics, *BIOMEDICAL engineering, *COMMUNICATION, *STATISTICS, *ARTIFICIAL blood circulation, *DATA analysis software, *USER interfaces, *MEDICAL equipment safety measures
- Abstract
Chat Generative Pretrained Transformer (ChatGPT; OpenAI) is a state-of-the-art large language model that can simulate human-like conversations based on user input. We evaluated the performance of GPT-4V on the Japanese National Clinical Engineer Licensing Examination using 2,155 questions from 2012 to 2023. The average correct answer rate across all questions was 86.0%. In particular, clinical medicine, basic medicine, medical materials, biological properties, and mechanical engineering achieved correct response rates of ≥90%. Conversely, medical device safety management, electrical and electronic engineering, and extracorporeal circulation obtained low correct answer rates, ranging from 64.8% to 76.5%. The correct answer rates for questions that included figures/tables, required numerical calculation, combined figures/tables with calculation, or required knowledge of Japanese Industrial Standards were 55.2%, 85.8%, 64.2%, and 31.0%, respectively. These low correct answer rates reflect ChatGPT's limited ability to recognize images and its lack of knowledge of standards and laws. This study concludes that careful attention is required when using ChatGPT because several of its explanations are inaccurate. [ABSTRACT FROM AUTHOR]
- Published
- 2024