Evaluating the interactions of Medical Doctors with chatbots based on large language models: Insights from a nationwide study in the Greek healthcare sector using ChatGPT.
- Source :
- Computers in Human Behavior, Dec 2024, Vol. 161
- Publication Year :
- 2024
Abstract
- In this AI-focused era, researchers are exploring AI applications in healthcare, with ChatGPT as a primary focus. This Greek study involved 182 doctors from various regions, who used a custom web application connected to ChatGPT 4.0. Doctors from diverse departments and experience levels engaged with ChatGPT, which provided tailored responses. Over one month, data were collected via a form with a 1-to-5 rating scale. The results showed varying satisfaction levels across four criteria: clarity, response time, accuracy, and overall satisfaction. ChatGPT's response speed received high ratings (3.85/5.0), whereas clarity of information was rated moderately (3.43/5.0). A notable finding was the correlation between a doctor's experience and their satisfaction with ChatGPT: more experienced doctors (over 21 years) reported lower satisfaction (2.80–3.74/5.0) than their less experienced counterparts (3.43–4.20/5.0). At the medical-field level, Internal Medicine showed higher satisfaction across the evaluation criteria (3.56–3.88) than other fields, while Psychiatry scored higher overall, with ratings from 3.63 to 5.00. The study also compared two departments, Urology and Internal Medicine; the latter reported higher satisfaction with accuracy, clarity of the provided information, response time, and overall experience. These findings illuminate the specific needs of the health sector and highlight both the potential and the areas for improvement in ChatGPT's provision of specialized medical information. Despite current limitations, ChatGPT in its present version offers a valuable resource to the medical community, signaling further advancements and potential integration into healthcare practices.
• Greek doctors evaluate ChatGPT 4.0 nationwide via a custom web app.
• ChatGPT earns high marks for speed; clarity scores are moderate.
• More experienced doctors report lower satisfaction.
• Doctors in Internal Medicine and Psychiatry show higher satisfaction than those in Surgery or Special Treatment Units.
• Findings underscore ChatGPT's potential and improvement areas in healthcare. [ABSTRACT FROM AUTHOR]
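Purely as an illustration of the kind of analysis the abstract describes (mean 1-to-5 ratings per criterion, compared across experience groups), the following is a minimal pandas sketch; the column names, experience bands, and rating values are hypothetical and are not drawn from the study's data.

```python
"""Hypothetical sketch: aggregate 1-to-5 satisfaction ratings
per criterion, grouped by years of experience, mirroring the
comparison the abstract reports. All data below is made up."""
import pandas as pd

# One row per doctor evaluation; scale is 1 (low) to 5 (high).
ratings = pd.DataFrame({
    "experience_band": ["0-10", "0-10", "11-20", "21+", "21+"],
    "clarity":         [4, 4, 3, 3, 2],
    "response_time":   [5, 4, 4, 3, 3],
    "accuracy":        [4, 4, 3, 3, 3],
    "overall":         [4, 4, 4, 3, 2],
})

# Mean satisfaction per criterion within each experience band,
# the grouping used to contrast senior and junior doctors.
summary = ratings.groupby("experience_band").mean().round(2)
print(summary)
```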
- Subjects :
- *GENERATIVE artificial intelligence
*UROLOGY
*SATISFACTION
*PSYCHIATRY
*ATTITUDES toward computers
*HEALTH
*PHYSICIANS' attitudes
*WORK experience (Employment)
*INFORMATION resources
*OPERATIVE surgery
*INTERNAL medicine
*MACHINE learning
*HEALTH care industry
*QUALITY assurance
*USER interfaces
*ACCESS to information
Details
- Language :
- English
- ISSN :
- 0747-5632
- Volume :
- 161
- Database :
- Academic Search Index
- Journal :
- Computers in Human Behavior
- Publication Type :
- Academic Journal
- Accession number :
- 179793917
- Full Text :
- https://doi.org/10.1016/j.chb.2024.108404