
Development and Validation of a Model to Identify Critical Brain Injuries Using Natural Language Processing of Text Computed Tomography Reports

Authors:
Victor M. Torres-Lopez
Grace E. Rovenolt
Angelo J. Olcese
Gabriella E. Garcia
Sarah M. Chacko
Amber Robinson
Edward Gaiser
Julian Acosta
Alison L. Herman
Lindsey R. Kuohn
Megan Leary
Alexandria L. Soto
Qiang Zhang
Safoora Fatima
Guido J. Falcone
M. Seyedmehdi Payabvash
Richa Sharma
Aaron F. Struck
Kevin N. Sheth
M. Brandon Westover
Jennifer A. Kim
Source:
JAMA Network Open, 5:e2227109
Publication Year:
2022
Publisher:
American Medical Association (AMA), 2022.

Abstract

Importance: Clinical text reports from head computed tomography (CT) represent rich, incompletely utilized information regarding acute brain injuries and neurologic outcomes. CT reports are unstructured; thus, extracting information at scale requires automated natural language processing (NLP). However, designing new NLP algorithms for each individual injury category is an unwieldy proposition. An NLP tool that summarizes all injuries in head CT reports would facilitate exploration of large data sets for the clinical significance of neuroradiological findings.

Objective: To automatically extract acute brain pathological data and their features from head CT reports.

Design, Setting, and Participants: This diagnostic study developed a 2-part named entity recognition (NER) NLP model to extract and summarize data on acute brain injuries from head CT reports. The model, termed BrainNERD, extracts and summarizes detailed brain injury information for research applications. Model development included building and comparing 2 NER models using a custom dictionary of terms, including lesion type, location, size, and age, then designing a rule-based decoder that uses the NER outputs to evaluate for the presence or absence of injury subtypes. BrainNERD was evaluated against independent test data sets of manually classified reports, including 2 external validation sets. The model was trained on head CT reports from 1152 patients generated by neuroradiologists at the Yale Acute Brain Injury Biorepository. External validation was conducted using reports from 2 outside institutions. Analyses were conducted from May 2020 to December 2021.

Main Outcomes and Measures: Performance of the BrainNERD model was evaluated using precision, recall, and F1 scores based on manually labeled independent test data sets.

Results: A total of 1152 patients (mean [SD] age, 67.6 [16.1] years; 586 [52%] men) were included in the training set. NER training using a transformer architecture with bidirectional encoder representations from transformers was significantly faster than spaCy. For all metrics, the 10-fold cross-validation performance was 93% to 99%. The final test performance metrics for the NER test data set were 98.82% (95% CI, 98.37%-98.93%) for precision, 98.81% (95% CI, 98.46%-99.06%) for recall, and 98.81% (95% CI, 98.40%-98.94%) for the F score. The expert review comparison metrics were 99.06% (95% CI, 97.89%-99.13%) for precision, 98.10% (95% CI, 97.93%-98.77%) for recall, and 98.57% (95% CI, 97.78%-99.10%) for the F score. The decoder test set metrics were 96.06% (95% CI, 95.01%-97.16%) for precision, 96.42% (95% CI, 94.50%-97.87%) for recall, and 96.18% (95% CI, 95.15%-97.16%) for the F score. Performance in external validation on 1053 head CT reports from the 2 outside institutions was greater than 96%.

Conclusions and Relevance: These findings suggest that the BrainNERD model accurately extracted acute brain injury terms and their properties from head CT text reports. This freely available new tool could advance clinical research by integrating information in easily gathered head CT reports to expand knowledge of acute brain injury radiographic phenotypes.
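To make the 2-part design concrete, below is a minimal sketch (not the published BrainNERD code) of how a rule-based decoder can turn NER output into presence/absence calls per injury subtype, as the abstract describes. The entity labels, trigger dictionary, and negation window are hypothetical illustrations, not the authors' custom dictionary.

```python
# Stage 1 (assumed): an NER model tags spans such as lesion type,
# location, size, and age. Stage 2: a rule-based decoder maps those
# tags to per-subtype presence/absence calls.
from typing import NamedTuple

class Entity(NamedTuple):
    text: str    # span extracted from the CT report
    label: str   # assumed tag set: "LESION", "LOCATION", "SIZE", "AGE"
    start: int   # token index where the span begins

# Hypothetical trigger dictionary mapping injury subtypes to terms.
SUBTYPE_TRIGGERS = {
    "subdural_hematoma": {"subdural"},
    "subarachnoid_hemorrhage": {"subarachnoid"},
    "ischemic_infarct": {"infarct", "ischemic"},
}

NEGATION_CUES = {"no", "without", "negative"}

def is_negated(tokens: list[str], start: int, window: int = 3) -> bool:
    """Crude negation check: scan a few tokens preceding the entity."""
    lo = max(0, start - window)
    return any(t.lower() in NEGATION_CUES for t in tokens[lo:start])

def decode(tokens: list[str], entities: list[Entity]) -> dict[str, bool]:
    """Map NER output to a presence/absence call for each subtype."""
    present = {subtype: False for subtype in SUBTYPE_TRIGGERS}
    for ent in entities:
        if ent.label != "LESION" or is_negated(tokens, ent.start):
            continue
        for subtype, triggers in SUBTYPE_TRIGGERS.items():
            if any(term in ent.text.lower() for term in triggers):
                present[subtype] = True
    return present

tokens = "small acute subdural hematoma , no subarachnoid hemorrhage".split()
entities = [Entity("subdural hematoma", "LESION", 2),
            Entity("subarachnoid hemorrhage", "LESION", 6)]
print(decode(tokens, entities))
# {'subdural_hematoma': True, 'subarachnoid_hemorrhage': False,
#  'ischemic_infarct': False}
```

Separating extraction from decoding in this way is what lets a single NER pass serve many injury categories: adding a new subtype only requires new decoder rules, not a new NLP model.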
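The abstract reports precision, recall, and F1 with 95% CIs against manually labeled test sets. The sketch below shows one standard way to compute such figures; the percentile bootstrap used for the CIs is an assumption, since the abstract does not state the paper's exact interval method.

```python
# Precision/recall/F1 over (gold, predicted) presence calls for one
# injury subtype, with a percentile-bootstrap 95% CI for F1 (assumed
# resampling scheme, not taken from the paper).
import random

def prf1(pairs: list[tuple[bool, bool]]) -> tuple[float, float, float]:
    """pairs = (gold, predicted) presence calls for one subtype."""
    tp = sum(g and p for g, p in pairs)
    fp = sum((not g) and p for g, p in pairs)
    fn = sum(g and (not p) for g, p in pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def bootstrap_f1_ci(pairs, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample reports with replacement."""
    rng = random.Random(seed)
    scores = sorted(prf1([rng.choice(pairs) for _ in pairs])[2]
                    for _ in range(n_boot))
    lo = scores[int(n_boot * alpha / 2)]
    hi = scores[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```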

Details

ISSN:
2574-3805
Volume:
5
Database:
OpenAIRE
Journal:
JAMA Network Open
Accession number:
edsair.doi.dedup.....9eb53e4b30a1ab1cbd3306935eb0cfc8