Expert evaluation of large language models for clinical dialogue summarization.
- Source :
- Scientific reports [Sci Rep] 2025 Jan 07; Vol. 15 (1), pp. 1195. Date of Electronic Publication: 2025 Jan 07.
- Publication Year :
- 2025
Abstract
- We assessed the performance of large language models in summarizing clinical dialogues using computational metrics and human evaluations, comparing automatically generated summaries against human-produced ones. We conducted an exploratory evaluation of five language models: one general summarization model, one fine-tuned for general dialogues, two fine-tuned with anonymized clinical dialogues, and one large language model (ChatGPT). The models were assessed with ROUGE and UniEval metrics, and expert human evaluation was performed by clinicians who compared the generated summaries against a clinician-written summary (the gold standard). The fine-tuned transformer model scored the highest when evaluated with ROUGE, while ChatGPT scored the lowest overall. With UniEval, however, ChatGPT scored the highest across all evaluated domains (coherence 0.957, consistency 0.7583, fluency 0.947, relevance 0.947, and overall score 0.9891). Similar results were obtained when the systems were evaluated by clinicians, with ChatGPT scoring the highest in four domains (coherence 0.573, consistency 0.908, fluency 0.96, and overall clinical use 0.862). Statistical analyses showed that the ChatGPT and human summaries differed from those of all other models. These exploratory results indicate that ChatGPT's performance in summarizing clinical dialogues approached the quality of human summaries. The study also found that ROUGE metrics may not be reliable for evaluating clinical summary generation, whereas UniEval correlated well with human ratings. Large language models may provide a successful path toward automating clinical dialogue summarization, although privacy concerns and the restricted nature of health records remain challenges for their integration. Further evaluations using diverse clinical dialogues and multiple initialization seeds are needed to verify the reliability and generalizability of automatically generated summaries.

Competing Interests: Ethics approval and consent to participate: not applicable. The authors declare no competing interests. Ethics approval for the original data collection is described in: Kocaballi AB, Coiera E, Tong HL, White SJ, Quiroz JC, Rezazadegan F, Willcock S, Laranjo L (2019) A network model of activities in primary care consultations. Journal of the American Medical Informatics Association 26:1074–1082.

(© 2025. The Author(s).)
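The abstract's metric pipeline is straightforward to reproduce in outline. As a minimal sketch (not the authors' code), the ROUGE comparison between a model-generated summary and a clinician-written reference can be computed with Google's rouge-score package; the two summary strings below are hypothetical placeholders, not study data.

```python
# Minimal ROUGE scoring sketch using Google's rouge-score package
# (pip install rouge-score). Both summaries are hypothetical placeholders,
# not data from the study.
from rouge_score import rouge_scorer

reference = "Patient reports two weeks of knee pain; advised rest and ice."          # clinician gold standard (hypothetical)
candidate = "The patient has had knee pain for two weeks; rest and ice were advised."  # model output (hypothetical)

# ROUGE-1/ROUGE-2 measure unigram/bigram overlap; ROUGE-L measures the
# longest common subsequence between candidate and reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, score in scores.items():
    # Each Score exposes precision, recall, and F1 (fmeasure).
    print(f"{name}: P={score.precision:.3f} R={score.recall:.3f} F1={score.fmeasure:.3f}")
```

As the abstract notes, such n-gram overlap can rank fluent, clinically usable summaries poorly, which is why the study complements ROUGE with UniEval's coherence, consistency, fluency, and relevance scores and with clinician ratings.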
- Subjects :
- Humans
- Natural Language Processing
- Electronic Health Records
- Language
Details
- Language :
- English
- ISSN :
- 2045-2322
- Volume :
- 15
- Issue :
- 1
- Database :
- MEDLINE
- Journal :
- Scientific reports
- Publication Type :
- Academic Journal
- Accession Number :
- 39774141
- Full Text :
- https://doi.org/10.1038/s41598-024-84850-x