Semi-supervised training using cooperative labeling of weakly annotated data for nodule detection in chest CT
- Author
- Maynord, Michael; Farhangi, M. Mehdi; Fermüller, Cornelia; Aloimonos, Yiannis; Levine, Gary; Petrick, Nicholas; Sahiner, Berkman; Pezeshk, Aria
- Subjects
- Computed tomography; Supervised learning; Machine learning; False positive error; Computer-aided diagnosis; Computer-assisted image analysis (Medicine); Image analysis; Diagnostic imaging
- Abstract
Purpose: Machine learning algorithms are best trained with large quantities of accurately annotated samples. While natural scene images can often be labeled relatively cheaply and at large scale, obtaining accurate annotations for medical images is both time consuming and expensive. In this study, we propose a cooperative labeling method that allows us to make use of weakly annotated medical imaging data for the training of a machine learning algorithm. As most clinically produced data are weakly annotated (produced for use by humans rather than machines, and lacking the information machine learning depends upon), this approach allows us to incorporate a wider range of clinical data and thereby increase the training set size.

Methods: Our pseudo-labeling method consists of multiple stages. In the first stage, a previously established network is trained using a limited number of samples with high-quality, expert-produced annotations. This network is used to generate annotations for a separate, larger dataset that contains only weakly annotated scans. In the second stage, by cross-checking the two types of annotations against each other, we obtain higher-fidelity annotations. In the third stage, we extract training data from the weakly annotated scans and combine it with the fully annotated data, producing a larger training dataset. We use this larger dataset to develop a computer-aided detection (CADe) system for nodule detection in chest CT.

Results: We evaluated the proposed approach by presenting the network with different numbers of expert-annotated scans in training and then testing the CADe system using an independent expert-annotated dataset. We demonstrate that when availability of expert annotations is severely limited, the inclusion of weakly labeled data leads to a 5% improvement in the competitive performance metric (CPM), defined as the average of sensitivities at different false-positive rates.

Conclusions: Our proposed approach can effectively merge a weakly annotated dataset with a small, well-annotated dataset for algorithm training. This approach can help enlarge limited training data by leveraging the large amount of weakly labeled data typically generated in clinical image interpretation.
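For reference, below is a minimal sketch of how the CPM quoted in the Results could be computed from an FROC analysis. The abstract only states that CPM is the average of sensitivities at different false-positive rates; the seven false-positive rates per scan and the sensitivity values used here are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical operating points: sensitivity of a CADe system at a set of
# false-positive rates per scan (the seven rates below are a common choice
# in nodule-detection FROC analysis; they are assumed, not from the paper).
fp_rates = [1/8, 1/4, 1/2, 1, 2, 4, 8]
sensitivities = [0.62, 0.70, 0.76, 0.82, 0.86, 0.89, 0.91]  # illustrative values

# CPM: the average of the sensitivities across the chosen false-positive rates.
cpm = float(np.mean(sensitivities))
print(f"CPM = {cpm:.3f}")  # prints 0.794 for the illustrative values above
```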
- Published
- 2023