Large language models in periodontology: Assessing their performance in clinically relevant questions.
- Source :
- The Journal of prosthetic dentistry [J Prosthet Dent] 2024 Nov 18. Date of Electronic Publication: 2024 Nov 18.
- Publication Year :
- 2024
- Publisher :
- Ahead of Print
Abstract
- Statement of Problem: Although the use of artificial intelligence (AI) seems promising and may assist dentists in clinical practice, the consequences of inaccurate or even harmful responses are a paramount concern. Research is required to examine whether large language models (LLMs) can reliably provide periodontal content.
- Purpose: The purpose of this study was to evaluate and compare the evidence-based potential of answers provided by 4 LLMs to common clinical questions in the field of periodontology.
- Material and Methods: A total of 10 open-ended questions pertinent to periodontology were posed to 4 distinct LLMs: ChatGPT model GPT 4.0, Google Gemini, Google Gemini Advanced, and Microsoft Copilot. The answers to each question were evaluated independently by 2 periodontists against robust scientific evidence, based on a predefined rubric assessing comprehensiveness, scientific accuracy, clarity, and relevance. Each response received a score ranging from 0 (minimum) to 10 (maximum). Two weeks after the initial evaluation, the answers were re-graded independently to gauge intra-evaluator reliability. Inter-evaluator reliability was assessed using correlation tests, while the Cronbach alpha and the intraclass correlation coefficient were used to measure overall reliability. The Kruskal-Wallis test was employed to compare the scores given by the different LLMs.
- Results: The scores provided by the 2 evaluators for both evaluations were statistically similar (P values ranging from .083 to >.999); therefore, an average score was calculated for each LLM. Both evaluators gave the highest scores to the answers generated by ChatGPT 4.0, while Google Gemini received the lowest scores. ChatGPT 4.0 received the highest average score, and a significant difference was detected between ChatGPT 4.0 and Google Gemini (P=.042). ChatGPT 4.0 answers were found to be highly comprehensive, scientifically accurate, clear, and relevant.
- Conclusions: Professionals need to be aware of the limitations of LLMs when utilizing them. These models must not replace dental professionals, as improper use may negatively impact patient care. ChatGPT 4.0, Google Gemini, Google Gemini Advanced, and Microsoft Copilot all performed relatively well, with ChatGPT 4.0 demonstrating the highest performance.
- (Copyright © 2024 Editorial Council for The Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.)
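The statistical workflow described in the Material and Methods section (inter-evaluator correlation, Cronbach alpha, and the Kruskal-Wallis comparison across models) can be illustrated with a minimal sketch. The Python snippet below is not the authors' code: all rubric scores are fabricated for illustration, the second evaluator's ratings are simulated, and the pairwise Mann-Whitney follow-up stands in for whatever post-hoc procedure the study actually applied; only numpy and scipy are assumed.

```python
# Illustrative sketch of the abstract's statistical workflow on
# hypothetical 0-10 rubric scores. Not the study's actual data or code.
import numpy as np
from scipy import stats

# Hypothetical scores: rows = 10 questions, columns = 4 LLMs
# (ChatGPT 4.0, Google Gemini, Google Gemini Advanced, Microsoft Copilot),
# averaged across the two evaluators.
scores = np.array([
    [9, 6, 7, 8],
    [8, 5, 7, 7],
    [9, 6, 8, 8],
    [8, 7, 7, 7],
    [10, 6, 8, 9],
    [9, 5, 7, 8],
    [8, 6, 8, 7],
    [9, 7, 8, 8],
    [9, 6, 7, 8],
    [8, 5, 7, 7],
], dtype=float)

# Inter-evaluator reliability: correlate the two evaluators' scores for one
# model (second evaluator simulated here for illustration).
eval_a = scores[:, 0]
eval_b = eval_a + np.random.default_rng(0).normal(0, 0.5, size=10)
rho, p_rho = stats.spearmanr(eval_a, eval_b)

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach alpha for a subjects-by-raters matrix."""
    k = ratings.shape[1]
    item_var = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

alpha = cronbach_alpha(np.column_stack([eval_a, eval_b]))

# Kruskal-Wallis test comparing the four LLMs' score distributions.
h, p_kw = stats.kruskal(scores[:, 0], scores[:, 1], scores[:, 2], scores[:, 3])

# Illustrative pairwise follow-up (ChatGPT 4.0 vs Google Gemini); the
# abstract does not specify which post-hoc test produced P=.042.
u, p_pair = stats.mannwhitneyu(scores[:, 0], scores[:, 1])

print(f"Spearman rho={rho:.2f} (p={p_rho:.3f}), alpha={alpha:.2f}")
print(f"Kruskal-Wallis H={h:.2f} (p={p_kw:.3f}), pairwise p={p_pair:.3f}")
```

With fabricated data like this, the pipeline reproduces the shape of the analysis (agreement between raters, overall reliability, and an omnibus test followed by a pairwise comparison) but none of the reported values.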
Details
- Language :
- English
- ISSN :
- 1097-6841
- Database :
- MEDLINE
- Journal :
- The Journal of prosthetic dentistry
- Publication Type :
- Academic Journal
- Accession number :
- 39562221
- Full Text :
- https://doi.org/10.1016/j.prosdent.2024.10.020