Screening mammography performance according to breast density: a comparison between radiologists versus standalone artificial intelligence detection.
- Author
- Kwon MR, Chang Y, Ham SY, Cho Y, Kim EY, Kang J, Park EK, Kim KH, Kim M, Kim TS, Lee H, Kwon R, Lim GY, Choi HR, Choi J, Kook SH, and Ryu S
- Subjects
- Humans, Female, Adult, Middle Aged, Retrospective Studies, Republic of Korea epidemiology, ROC Curve, Breast diagnostic imaging, Breast pathology, Algorithms, Mass Screening methods, Sensitivity and Specificity, Breast Neoplasms diagnostic imaging, Breast Neoplasms diagnosis, Breast Neoplasms pathology, Breast Neoplasms epidemiology, Mammography methods, Breast Density, Radiologists, Early Detection of Cancer methods, Artificial Intelligence
- Abstract
Background: Artificial intelligence (AI) algorithms for the independent assessment of screening mammograms have not been well established in a large screening cohort of Asian women. We compared the performance of screening digital mammography between radiologists and standalone AI detection among Korean women, according to breast density.

Methods: We retrospectively included 89,855 Korean women who underwent their initial screening digital mammography from 2009 to 2020. Breast cancer diagnosed within 12 months of the screening mammography, according to the National Cancer Registry, was the reference standard. Lunit software was used to determine probability-of-malignancy scores, with a cutoff of 10% for breast cancer detection. The AI's performance was compared with that of the final Breast Imaging Reporting and Data System (BI-RADS) category recorded by breast radiologists. Breast density was classified into four categories (A-D) based on radiologist and AI-based assessments. Performance metrics (cancer detection rate [CDR], sensitivity, specificity, positive predictive value [PPV], recall rate, and area under the receiver operating characteristic curve [AUC]) were compared across breast density categories.

Results: Mean participant age was 43.5 ± 8.7 years; 143 breast cancer cases were identified within 12 months. The CDRs (1.1 per 1000 examinations) and sensitivity values showed no significant differences between radiologist and AI-based results (69.9% [95% confidence interval (CI), 61.7-77.3] vs. 67.1% [95% CI, 58.8-74.8]). However, the AI algorithm showed better specificity (93.0% [95% CI, 92.9-93.2] vs. 77.6% [95% CI, 61.7-77.9]), PPV (1.5% [95% CI, 1.2-1.9] vs. 0.5% [95% CI, 0.4-0.6]), recall rate (7.1% [95% CI, 6.9-7.2] vs. 22.5% [95% CI, 22.2-22.7]), and AUC (0.80 [95% CI, 0.76-0.84] vs. 0.74 [95% CI, 0.70-0.78]) (all P < 0.05). Both radiologist and AI-based results showed the best performance in the non-dense category; CDR and sensitivity were higher for radiologists in the heterogeneously dense category (P = 0.059). However, specificity, PPV, and recall rate consistently favored AI-based results across all categories, including the extremely dense category.

Conclusions: AI-based software showed slightly lower sensitivity than radiologists, although the difference was not statistically significant; however, it outperformed radiologists in recall rate, specificity, PPV, and AUC, with the disparities most prominent in extremely dense breast tissue.

(© 2024. The Author(s).)
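As a minimal sketch of how the summary statistics reported above can be derived from per-examination outcomes, the example below computes CDR, sensitivity, specificity, PPV, and recall rate against a reference standard, using the 10% probability-of-malignancy cutoff described in the Methods. The `Exam` structure, field names, and `screening_metrics` helper are hypothetical illustrations, not the study's actual analysis code or the Lunit software's interface.

```python
# Illustrative sketch: screening metrics from per-examination data.
# Assumes each examination records an AI malignancy score, the radiologist's
# recall decision, and whether cancer was diagnosed within 12 months.
from dataclasses import dataclass

@dataclass
class Exam:
    ai_score: float            # AI probability of malignancy, 0.0-1.0 (hypothetical field)
    radiologist_recall: bool   # recall per final BI-RADS assessment (hypothetical field)
    cancer_within_12mo: bool   # reference standard from the cancer registry

def screening_metrics(exams, positive):
    """Compute metrics given a predicate `positive(exam)` defining a recall."""
    tp = sum(1 for e in exams if positive(e) and e.cancer_within_12mo)
    fp = sum(1 for e in exams if positive(e) and not e.cancer_within_12mo)
    fn = sum(1 for e in exams if not positive(e) and e.cancer_within_12mo)
    tn = sum(1 for e in exams if not positive(e) and not e.cancer_within_12mo)
    n = len(exams)
    return {
        "CDR_per_1000": 1000 * tp / n,   # screen-detected cancers per 1000 examinations
        "sensitivity": tp / (tp + fn),   # detected cancers / all cancers
        "specificity": tn / (tn + fp),   # correctly cleared / all cancer-free women
        "PPV": tp / (tp + fp),           # cancers among recalled examinations
        "recall_rate": (tp + fp) / n,    # fraction of examinations recalled
    }

# Usage: compare AI (score >= 0.10) with radiologist recall on the same cohort.
# ai_results  = screening_metrics(exams, lambda e: e.ai_score >= 0.10)
# rad_results = screening_metrics(exams, lambda e: e.radiologist_recall)
```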
- Published
- 2024