
Pre-Consultation System Based on the Artificial Intelligence Has a Better Diagnostic Performance Than the Physicians in the Outpatient Department of Pediatrics

Authors :
Han Qian
Bin Dong
Jia-jun Yuan
Fan Yin
Zhao Wang
Hai-ning Wang
Han-song Wang
Dan Tian
Wei-hua Li
Bin Zhang
Lie-bin Zhao
Bo-tao Ning
Source :
Frontiers in Medicine, Vol 8 (2021)
Publication Year :
2021
Publisher :
Frontiers Media S.A., 2021.

Abstract

Artificial intelligence (AI) has been widely applied in the medical field and shows broad application prospects. The pre-consultation system is an important supplement to traditional face-to-face consultation, and combining AI with pre-consultation can help improve the efficiency of clinical work. However, it remains challenging for AI to analyze and process complicated electronic health record (EHR) data. Our pre-consultation system uses an automated natural language processing (NLP) system to communicate with patients through mobile terminals, applies deep learning (DL) techniques to extract symptomatic information, and finally outputs structured electronic medical records. From November 2019 to May 2020, a total of 2,648 pediatric patients used our model to provide their medical history and receive a primary diagnosis before visiting the physicians in the outpatient department of the Shanghai Children's Medical Center. Our task was to evaluate the ability of the AI and the doctors to reach the primary diagnosis and to analyze how the consistency between the medical histories recorded by our model and by the physicians affected diagnostic performance. The results showed that, without considering whether the medical histories recorded by the AI and the doctors were consistent, our model performed worse than the physicians, with a lower average F1 score (0.825 vs. 0.912). However, when the chief complaint or the history of present illness described by the AI and the doctors was consistent, our model achieved a higher average F1 score, closer to that of the doctors. Finally, when the AI had the same diagnostic conditions as the doctors, our model achieved a higher average F1 score (0.931) than the physicians (0.92). This study demonstrated that our model can obtain a more structured medical history and has sound diagnostic logic, which would help improve the diagnostic accuracy of outpatient doctors and reduce misdiagnosis and missed diagnoses. However, our model still needs a good deal of training to obtain more accurate symptomatic information.
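As a point of reference, the "average F1 score" reported above can be read as a per-diagnosis F1 averaged across diagnosis classes. The abstract does not specify the exact averaging scheme, so the sketch below simply assumes an unweighted (macro) average over primary-diagnosis labels; the diagnosis names and label sequences are illustrative only and are not data from the study.

```python
def macro_f1(true_labels, predicted_labels):
    """Unweighted (macro) average of per-class F1 scores.

    Assumes each case has exactly one true primary diagnosis and one
    predicted primary diagnosis; the macro averaging scheme is an
    assumption, not taken from the paper.
    """
    classes = set(true_labels) | set(predicted_labels)
    f1_per_class = []
    for c in classes:
        tp = sum(1 for t, p in zip(true_labels, predicted_labels) if t == c and p == c)
        fp = sum(1 for t, p in zip(true_labels, predicted_labels) if t != c and p == c)
        fn = sum(1 for t, p in zip(true_labels, predicted_labels) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        f1_per_class.append(f1)
    return sum(f1_per_class) / len(f1_per_class) if f1_per_class else 0.0

# Illustrative labels only (hypothetical diagnoses, not data from the study).
gold = ["bronchitis", "pneumonia", "bronchitis", "rhinitis"]
ai_pred = ["bronchitis", "bronchitis", "bronchitis", "rhinitis"]
doctor_pred = ["bronchitis", "pneumonia", "rhinitis", "rhinitis"]

print(f"AI macro F1:     {macro_f1(gold, ai_pred):.3f}")
print(f"Doctor macro F1: {macro_f1(gold, doctor_pred):.3f}")
```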

Details

Language :
English
ISSN :
2296-858X
Volume :
8
Database :
Directory of Open Access Journals
Journal :
Frontiers in Medicine
Publication Type :
Academic Journal
Accession number :
edsdoj.f94313959c2240faaa3a633229e4a3ba
Document Type :
article
Full Text :
https://doi.org/10.3389/fmed.2021.695185