Test-time bi-directional adaptation between image and model for robust segmentation.
- Source :
- Computer Methods & Programs in Biomedicine, May 2023, Vol. 233
- Publication Year :
- 2023
Abstract
- • An effective test-time bi-directional adaptation strategy is proposed to achieve robust segmentation. • A window-based order statistics alignment module is presented to adapt appearance-agnostic test images to existing learned models. • An augmented self-supervised learning module is developed to adapt the segmentation model to images with unknown appearance shifts. • The method generalizes well across multi-vendor/multi-center datasets. Deep learning models often suffer performance degradation when deployed in real clinical environments because of appearance shifts between training and testing images. Most existing methods use training-time adaptation, which almost always requires target-domain samples during the training phase. However, such solutions are tied to the training process and cannot guarantee accurate prediction for test samples with unforeseen appearance shifts; moreover, collecting target samples in advance is impractical. In this paper, we present a general method for making existing segmentation models robust to samples with unknown appearance shifts when deployed in daily clinical practice. Our proposed test-time bi-directional adaptation framework combines two complementary strategies. First, our image-to-model (I2M) adaptation strategy adapts appearance-agnostic test images to the learned segmentation model using a novel plug-and-play statistical alignment style transfer module at test time. Second, our model-to-image (M2I) adaptation strategy adapts the learned segmentation model to test images with unknown appearance shifts by applying an augmented self-supervised learning module that fine-tunes the model with proxy labels the model itself generates; this procedure is adaptively constrained by our novel proxy consistency criterion. Together, the complementary I2M and M2I strategies achieve robust segmentation against unknown appearance shifts using existing deep-learning models.
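The I2M idea of aligning a test image's intensity statistics to those the model saw during training can be illustrated with a minimal sketch. The paper's module uses window-based order statistics; the version below is a simplified assumption that matches only global mean and standard deviation, and the function name `align_statistics` is hypothetical:

```python
import numpy as np

def align_statistics(test_img, src_mean, src_std, eps=1e-6):
    # Shift and rescale the test image so its global intensity
    # statistics match the source-domain statistics the segmentation
    # model was trained on (a simplified stand-in for the paper's
    # window-based order statistics alignment).
    t_mean, t_std = test_img.mean(), test_img.std()
    return (test_img - t_mean) / (t_std + eps) * src_std + src_mean
```

Because the transform is plug-and-play, it can be applied to each incoming test image before inference without touching the trained model's weights.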
Extensive experiments on 10 datasets containing fetal ultrasound, chest X-ray, and retinal fundus images demonstrate that our proposed method segments images with unknown appearance shifts robustly and efficiently. To address the appearance-shift problem in clinically acquired medical images, we provide robust segmentation by using two complementary strategies. Our solution is general and amenable to deployment in clinical settings. [ABSTRACT FROM AUTHOR]
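The M2I self-supervised fine-tuning loop, in which the model generates proxy labels for the test data and then trains on them, can be sketched with a toy logistic model. Everything here is illustrative: the function names and the single-step logistic update are assumptions, and the paper's augmented self-supervised learning and proxy consistency criterion are not reproduced:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def self_train_step(w, x, lr=0.1, tau=0.5):
    # One M2I-style update on a toy logistic model:
    # 1) predict on the (unlabeled) test inputs,
    # 2) threshold the predictions into hard proxy labels,
    # 3) take a gradient step on the cross-entropy against those proxies.
    p = sigmoid(x @ w)                    # model predictions on test data
    proxy = (p > tau).astype(float)       # self-generated proxy labels
    grad = x.T @ (p - proxy) / len(x)     # cross-entropy gradient w.r.t. w
    return w - lr * grad
```

Iterating this step fine-tunes the model toward its own confident predictions on the shifted test data; in the paper, a proxy consistency criterion additionally constrains how far this self-training may drift.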
Details
- Language :
- English
- ISSN :
- 0169-2607
- Volume :
- 233
- Database :
- Academic Search Index
- Journal :
- Computer Methods & Programs in Biomedicine
- Publication Type :
- Academic Journal
- Accession number :
- 162937043
- Full Text :
- https://doi.org/10.1016/j.cmpb.2023.107477