
Which Pixel to Annotate: A Label-Efficient Nuclei Segmentation Framework.

Authors :
Lou W
Li H
Li G
Han X
Wan X
Source :
IEEE transactions on medical imaging [IEEE Trans Med Imaging] 2023 Apr; Vol. 42 (4), pp. 947-958. Date of Electronic Publication: 2023 Apr 03.
Publication Year :
2023

Abstract

Recently, deep neural networks, which require large amounts of annotated samples, have been widely applied to nuclei instance segmentation of H&E-stained pathology images. However, it is inefficient and unnecessary to label every pixel in a dataset of nuclei images, which usually contain similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the annotation workload. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial to training. Then we introduce a conditional single-image GAN with a component-wise discriminator to synthesize more training samples. Lastly, our framework trains an existing segmentation model on the augmented samples. Experimental results show that the proposed method can match the performance of a fully-supervised baseline while annotating less than 5% of the pixels on some benchmarks.
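The abstract does not spell out how the consistency-based patch selection works; one common interpretation of such a criterion is to score each candidate patch by how much a model's predictions disagree across augmented views, then annotate the patches with the highest disagreement. The sketch below is purely illustrative (the function names, the variance-based score, and the NumPy setup are assumptions, not the paper's actual method):

```python
import numpy as np

def consistency_score(preds):
    # Mean pixel-wise variance of foreground probabilities across
    # augmented views; preds has shape (n_views, H, W).
    # Higher variance = less consistent predictions for this patch.
    return float(np.var(preds, axis=0).mean())

def select_patches(view_preds, k):
    # Pick the k patches whose augmented predictions disagree most,
    # i.e. the ones a consistency criterion would flag as most
    # beneficial to annotate. view_preds is a list of (n_views, H, W)
    # arrays, one per candidate patch.
    scores = [consistency_score(p) for p in view_preds]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

# Toy example: one patch with identical predictions across views
# (perfectly consistent) and one with random, conflicting predictions.
rng = np.random.default_rng(0)
stable = np.stack([np.full((4, 4), 0.9)] * 3)
unstable = np.stack([rng.random((4, 4)) for _ in range(3)])
chosen = select_patches([stable, unstable], k=1)
```

Here `chosen` contains the index of the inconsistent patch, since its per-pixel variance across views is nonzero while the stable patch scores exactly zero.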

Details

Language :
English
ISSN :
1558-254X
Volume :
42
Issue :
4
Database :
MEDLINE
Journal :
IEEE transactions on medical imaging
Publication Type :
Academic Journal
Accession number :
36355729
Full Text :
https://doi.org/10.1109/TMI.2022.3221666