
An open-source fine-tuned large language model for radiological impression generation: a multi-reader performance study.

Authors :
Serapio, Adrian
Chaudhari, Gunvant
Savage, Cody
Lee, Yoo Jin
Vella, Maya
Sridhar, Shravan
Schroeder, Jamie Lee
Liu, Jonathan
Yala, Adam
Sohn, Jae Ho
Source :
BMC Medical Imaging; 9/27/2024, Vol. 24 Issue 1, p1-14, 14p
Publication Year :
2024

Abstract

Background: The impression section integrates the key findings of a radiology report but can be subjective and variable. We sought to fine-tune and evaluate an open-source Large Language Model (LLM) for automatically generating impressions from the remainder of a radiology report across different imaging modalities and hospitals.

Methods: In this institutional review board-approved retrospective study, we collated a dataset of CT, US, and MRI radiology reports from the University of California San Francisco Medical Center (UCSFMC) (n = 372,716) and the Zuckerberg San Francisco General (ZSFG) Hospital and Trauma Center (n = 60,049), both part of a single institution. The Recall-Oriented Understudy for Gisting Evaluation (ROUGE) score, a metric that measures word overlap, was used for automatic natural language evaluation. A reader study with five cardiothoracic radiologists was performed to more strictly evaluate the model's performance on a single modality (CT chest exams) against a subspecialist radiologist baseline. We stratified the reader study results by diagnosis category and by original impression length as a gauge of case complexity.

Results: The LLM achieved ROUGE-L scores of 46.51, 44.2, and 50.96 on UCSFMC reports and, upon external validation, 40.74, 37.89, and 24.61 on ZSFG reports for the CT, US, and MRI modalities respectively, implying substantial overlap between the model-generated impressions and those written by the subspecialist attending radiologists, with some degradation upon external validation. In the reader study, the model-generated impressions achieved overall mean scores of 3.56/4, 3.92/4, 3.37/4, 18.29 s, 12.32 words, and 84, while the original impressions written by subspecialist radiologists achieved overall mean scores of 3.75/4, 3.87/4, 3.54/4, 12.2 s, 5.74 words, and 89, for clinical accuracy, grammatical accuracy, stylistic quality, edit time, edit distance, and ROUGE-L score respectively. The LLM achieved its highest clinical accuracy ratings on acute/emergent findings and on shorter impressions.

Conclusions: An open-source fine-tuned LLM can generate impressions with a satisfactory level of clinical accuracy, grammatical accuracy, and stylistic quality. Our reader performance study demonstrates the potential of large language models to draft radiology report impressions that can help streamline radiologists' workflows.
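The abstract reports two automatic metrics, ROUGE-L (longest-common-subsequence word overlap) and an edit distance in words, without giving an implementation. The minimal Python sketch below illustrates one plausible reading of both: it assumes whitespace tokenization, lowercasing, and the ROUGE-L F1 variant scaled to 0-100; the example sentence pair is hypothetical, and published evaluations typically use a library such as rouge-score rather than hand-rolled code.

def lcs_length(a, b):
    # Dynamic-programming length of the longest common subsequence of two token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a, 1):
        for j, tok_b in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if tok_a == tok_b else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    # ROUGE-L F1 on whitespace tokens, scaled to 0-100 as in the abstract (assumption).
    cand, ref = candidate.lower().split(), reference.lower().split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 100 * 2 * precision * recall / (precision + recall)

def word_edit_distance(candidate, reference):
    # Word-level Levenshtein distance (insertions, deletions, substitutions);
    # one plausible reading of the "edit distance in words" metric.
    a, b = candidate.split(), reference.split()
    dp = list(range(len(b) + 1))
    for i, tok_a in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, tok_b in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (tok_a != tok_b))
    return dp[len(b)]

# Hypothetical example: a model-generated impression vs. a radiologist's original.
print(rouge_l("no acute cardiopulmonary abnormality",
              "no acute cardiopulmonary process"))             # 75.0
print(word_edit_distance("no acute cardiopulmonary abnormality",
                         "no acute cardiopulmonary process"))  # 1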

Details

Language :
English
ISSN :
1471-2342
Volume :
24
Issue :
1
Database :
Complementary Index
Journal :
BMC Medical Imaging
Publication Type :
Academic Journal
Accession number :
179968417
Full Text :
https://doi.org/10.1186/s12880-024-01435-w