Deep-learning model accurately classifies multi-label lung ultrasound findings, enhancing diagnostic accuracy and inter-reader agreement
- Authors
Daeeon Hong, Hyewon Choi, Wonju Hong, Yisak Kim, Tae Jung Kim, Jinwook Choi, Sang-Bae Ko, and Chang Min Park
- Subjects
Artificial intelligence, Ultrasonography, Thorax, Classification, Medicine, Science
- Abstract
Abstract Despite the increasing use of lung ultrasound (LUS) in the evaluation of respiratory disease, operators’ competence constrains its effectiveness. We developed a deep-learning (DL) model for multi-label classification using LUS and validated its performance and efficacy on inter-reader variability. We retrospectively collected LUS and labeled as normal, B-line, consolidation, and effusion from patients undergoing thoracentesis at a tertiary institution between January 2018 and January 2022. The development and internal testing involved 7580 images from January 2018 and December 2020, and the model’s performance was validated on a temporally separated test set (n = 985 images collected after January 2021) and two external test sets (n = 319 and 54 images). Two radiologists interpreted LUS with and without DL assistance and compared diagnostic performance and agreement. The model demonstrated robust performance with AUCs: 0.93 (95% CI 0.92–0.94) for normal, 0.87 (95% CI 0.84–0.89) for B-line, 0.82 (95% CI 0.78–0.86) for consolidation, and 0.94 (95% CI 0.93–0.95) for effusion. The model improved reader accuracy for binary discrimination (normal vs. abnormal; reader 1: 87.5–95.6%, p = 0.004; reader 2: 95.0–97.5%, p = 0.19), and agreement (k = 0.73–0.83, p = 0.01). In conclusion, the DL-based model may assist interpretation, improving accuracy and overcoming operator competence limitations in LUS.
- Published
2024