
Assessing the Accuracy, Completeness, and Reliability of Artificial Intelligence-Generated Responses in Dentistry: A Pilot Study Evaluating the ChatGPT Model.

Authors :
Molena KF
Macedo AP
Ijaz A
Carvalho FK
Gallo MJD
Wanderley Garcia de Paula E Silva F
de Rossi A
Mezzomo LA
Mugayar LRF
Queiroz AM
Source :
Cureus [Cureus] 2024 Jul 29; Vol. 16 (7), pp. e65658. Date of Electronic Publication: 2024 Jul 29 (Print Publication: 2024).
Publication Year :
2024

Abstract

Background: Artificial intelligence (AI) can serve as a tool for diagnosis and knowledge acquisition, particularly in dentistry, sparking debates on its application in clinical decision-making.

Objective: This study aims to evaluate the accuracy, completeness, and reliability of responses generated by the Chatbot Generative Pre-Trained Transformer (ChatGPT) 3.5 in dentistry, using expert-formulated questions.

Materials and Methods: Experts were invited to create three questions, answers, and corresponding references within their fields of specialization. A Likert scale was used to evaluate the level of agreement between the expert answers and the ChatGPT responses. Statistical analysis compared the descriptive and binary question groups in terms of accuracy and completeness. Questions with low accuracy underwent re-evaluation, and the subsequent responses were compared for improvement. The Wilcoxon test was used (α = 0.05).

Results: Ten experts across six dental specialties generated 30 binary and descriptive dental questions and references. The accuracy score had a median of 5.50 and a mean of 4.17; for completeness, the median was 2.00 and the mean was 2.07. No difference was observed between descriptive and binary responses in either accuracy or completeness. However, re-evaluated responses showed significant improvement in both accuracy (median 5.50 vs. 6.00; mean 4.17 vs. 4.80; p = 0.042) and completeness (median 2.00 vs. 2.00; mean 2.07 vs. 2.30; p = 0.011). References were more often incorrect than correct, with no differences between descriptive and binary questions.

Conclusions: ChatGPT initially demonstrated good accuracy and completeness, which improved further with machine learning (ML) over time. However, some inaccurate answers and references persisted. Human critical discernment remains essential for handling complex clinical cases and advancing theoretical knowledge and evidence-based practice.

Competing Interests: Human subjects: consent was obtained or waived by all participants in this study. The Institutional Research Ethics Committee issued approval 69712923.6.0000.5419 ("The research project was approved"). Animal subjects: all authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: in compliance with the ICMJE uniform disclosure form, all authors declare the following. Payment/services info: all authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: all authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: all authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.

(Copyright © 2024, Molena et al.)
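The paired comparison reported in the Results (initial vs. re-evaluated responses) corresponds to a Wilcoxon signed-rank test on paired ratings. Below is a minimal sketch in Python of that kind of analysis, assuming hypothetical paired Likert ratings on a 1-6 accuracy scale; the ratings, sample size shown, and scale range are illustrative assumptions, not the study's actual data.

```python
# Sketch of the paired Wilcoxon comparison described in the abstract.
# All ratings below are hypothetical; the study's data are not reproduced here.
from scipy.stats import wilcoxon

# Hypothetical accuracy ratings (assumed 1-6 Likert scale) for the same set
# of questions: first ChatGPT response vs. the re-evaluated response.
first_round  = [6, 5, 4, 6, 2, 5, 6, 3, 5, 6, 1, 6, 5, 4, 6]
second_round = [6, 6, 5, 6, 3, 5, 6, 4, 5, 6, 2, 6, 6, 5, 6]

# Two-sided paired Wilcoxon signed-rank test at alpha = 0.05, as in the study.
# scipy's default zero_method drops pairs with identical ratings.
result = wilcoxon(first_round, second_round)
print(f"statistic = {result.statistic:.2f}, p-value = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("Significant difference between initial and re-evaluated ratings.")
```

The Wilcoxon signed-rank test fits this design because the two sets of ratings are paired (same question, two response rounds) and Likert scores are ordinal, so a nonparametric test is preferred over a paired t-test.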

Details

Language :
English
ISSN :
2168-8184
Volume :
16
Issue :
7
Database :
MEDLINE
Journal :
Cureus
Publication Type :
Academic Journal
Accession Number :
39205730
Full Text :
https://doi.org/10.7759/cureus.65658