
The effects of natural language processing on cross-institutional portability of influenza case detection for disease surveillance.

Authors :
Ferraro JP
Ye Y
Gesteland PH
Haug PJ
Tsui FR
Cooper GF
Van Bree R
Ginter T
Nowalk AJ
Wagner M
Source :
Applied clinical informatics [Appl Clin Inform] 2017 May 31; Vol. 8 (2), pp. 560-580. Date of Electronic Publication: 2017 May 31.
Publication Year :
2017

Abstract

Objectives: This study evaluates the accuracy and portability of a natural language processing (NLP) tool for extracting clinical findings of influenza from clinical notes across two large healthcare systems. Effectiveness is evaluated on how well NLP supports downstream influenza case-detection for disease surveillance.

Methods: We independently developed two NLP parsers, one at Intermountain Healthcare (IH) in Utah and the other at University of Pittsburgh Medical Center (UPMC), using local clinical notes from emergency department (ED) encounters for influenza. We measured NLP parser performance for the presence and absence of 70 clinical findings indicative of influenza. We then developed Bayesian network models from NLP-processed reports and tested their ability to discriminate among cases of (1) influenza, (2) non-influenza influenza-like illness (NI-ILI), and (3) 'other' diagnosis.

Results: On Intermountain Healthcare reports, recall and precision of the IH NLP parser were 0.71 and 0.75, respectively, and of the UPMC NLP parser, 0.67 and 0.79. On University of Pittsburgh Medical Center reports, recall and precision of the UPMC NLP parser were 0.73 and 0.80, respectively, and of the IH NLP parser, 0.53 and 0.80. Bayesian case-detection performance, measured by AUROC for influenza versus non-influenza, was 0.93 on Intermountain Healthcare cases (using either the IH or the UPMC NLP parser). Case-detection on University of Pittsburgh Medical Center cases was 0.95 (using the UPMC NLP parser) and 0.83 (using the IH NLP parser). For influenza versus NI-ILI on Intermountain Healthcare cases, performance was 0.70 (using the IH NLP parser) and 0.76 (using the UPMC NLP parser); on University of Pittsburgh Medical Center cases, 0.76 (using the UPMC NLP parser) and 0.65 (using the IH NLP parser).

Conclusion: In all but one instance (influenza versus NI-ILI on IH cases), local parsers were more effective at supporting case-detection, although the performance of non-local parsers was reasonable.
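For readers less familiar with the evaluation metrics reported in the abstract, the following is a minimal, self-contained sketch (not taken from the paper) of how recall, precision, and AUROC of this kind might be computed. The finding names, scores, and helper functions are hypothetical illustrations only; the study's actual parsers, annotation scheme, and Bayesian models are not reproduced here.

```python
from typing import List, Tuple


def precision_recall(predicted: List[Tuple[str, str]],
                     reference: List[Tuple[str, str]]) -> Tuple[float, float]:
    """Precision and recall of extracted (finding, assertion) pairs
    against a reference-standard annotation set for one report."""
    pred, ref = set(predicted), set(reference)
    tp = len(pred & ref)   # findings the parser asserted correctly
    fp = len(pred - ref)   # spurious findings
    fn = len(ref - pred)   # findings the parser missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


def auroc(scores: List[float], labels: List[int]) -> float:
    """AUROC via the rank-sum (Mann-Whitney U) formulation:
    the probability that a randomly chosen positive case receives
    a higher score than a randomly chosen negative case."""
    pairs = sorted(zip(scores, labels))
    ranks = [0.0] * len(pairs)
    i = 0
    while i < len(pairs):                      # assign average ranks to ties
        j = i
        while j + 1 < len(pairs) and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1             # 1-based average rank
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    n_pos = sum(lbl for _, lbl in pairs)
    n_neg = len(pairs) - n_pos
    rank_sum_pos = sum(r for r, (_, lbl) in zip(ranks, pairs) if lbl == 1)
    u = rank_sum_pos - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)


# Hypothetical parser output vs. reference annotations for one ED note
pred = [("cough", "present"), ("fever", "present"), ("myalgia", "absent")]
ref = [("cough", "present"), ("fever", "present"), ("rhinorrhea", "present")]
print(precision_recall(pred, ref))   # (0.67, 0.67)

# Hypothetical case-detection scores (e.g., posterior probability of influenza)
probs = [0.9, 0.8, 0.3, 0.2, 0.7]
labels = [1, 1, 0, 0, 1]
print(auroc(probs, labels))          # 1.0 on this toy data; the paper reports 0.65-0.95
```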

Details

Language :
English
ISSN :
1869-0327
Volume :
8
Issue :
2
Database :
MEDLINE
Journal :
Applied clinical informatics
Publication Type :
Academic Journal
Accession number :
28561130
Full Text :
https://doi.org/10.4338/ACI-2016-12-RA-0211