
Segmentation of Clinical Target Volume From CT Images for Cervical Cancer Using Deep Learning.

Authors :
Huang, Mingxu
Feng, Chaolu
Sun, Deyu
Cui, Ming
Zhao, Dazhe
Source :
Technology in Cancer Research & Treatment; 1/4/2023, p1-8, 8p
Publication Year :
2023

Abstract

Introduction: Segmentation of the clinical target volume (CTV) from CT images is critical for cervical cancer brachytherapy, but manual delineation is time-consuming, laborious, and poorly reproducible. In this work, we propose an end-to-end model to segment the CTV for cervical cancer brachytherapy accurately. Methods: An improved M-Net model (Mnet_IM) is proposed to segment the CTV of cervical cancer from CT images. An input branch and an output branch are attached to the bottom layer of the network to address the difficulty of locating the CTV, which has lower contrast than the surrounding organs and tissues. A progressive fusion approach is then proposed to recover the prediction results layer by layer and improve the smoothness of the segmentation. A loss function is defined on each of the multiscale outputs to form a deep supervision mechanism. Finally, the numbers of feature-map channels directly connected to the inputs are homogenized for each image resolution to reduce feature redundancy and computational burden. Results: Experiments on 5438 image slices from 53 cervical cancer patients show that the proposed model outperforms several representative models in segmentation accuracy, measured by average surface distance, 95% Hausdorff distance, surface overlap, surface Dice, and volumetric Dice. Conclusion: The CTV predicted by the proposed Mnet_IM agrees more closely with the manually labeled ground truth than the predictions of several representative state-of-the-art models. [ABSTRACT FROM AUTHOR]
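The abstract mentions a deep supervision mechanism in which a loss is defined on each multiscale output. The sketch below (not the authors' code) illustrates that idea in PyTorch: side predictions at several resolutions are upsampled to the label size and a soft Dice loss is averaged over them. The Dice formulation, equal weighting, and function names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of deep supervision over multiscale outputs (assumed setup,
# not the published Mnet_IM implementation).
import torch
import torch.nn.functional as F


def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for a binary segmentation map (logits: N x 1 x H x W)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()


def deep_supervision_loss(outputs: list[torch.Tensor], target: torch.Tensor) -> torch.Tensor:
    """Average a Dice loss over every multiscale side output.

    Each coarse prediction is upsampled to the ground-truth resolution before
    its loss is computed; equal weights are used here purely for illustration.
    """
    total = target.new_zeros(())
    for out in outputs:
        if out.shape[-2:] != target.shape[-2:]:
            out = F.interpolate(out, size=target.shape[-2:], mode="bilinear", align_corners=False)
        total = total + soft_dice_loss(out, target)
    return total / len(outputs)


if __name__ == "__main__":
    # Toy example: three side outputs at 1/4, 1/2, and full resolution.
    target = (torch.rand(2, 1, 128, 128) > 0.5).float()
    outputs = [torch.randn(2, 1, s, s) for s in (32, 64, 128)]
    print(deep_supervision_loss(outputs, target))
```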

Details

Language :
English
ISSN :
1533-0346
Database :
Complementary Index
Journal :
Technology in Cancer Research & Treatment
Publication Type :
Academic Journal
Accession number :
161195448
Full Text :
https://doi.org/10.1177/15330338221139164