
MFU-Net: a deep multimodal fusion network for breast cancer segmentation with dual-layer spectral detector CT.

Authors :
Yang, Aisen
Xu, Lulu
Qin, Na
Huang, Deqing
Liu, Ziyi
Shu, Jian
Source :
Applied Intelligence; Mar 2024, Vol. 54, Issue 5, p3808-3824, 17p
Publication Year :
2024

Abstract

Despite advances in medical imaging technologies, breast cancer segmentation remains challenging, especially when multimodal imaging is considered. Compared to a single-modality image, multimodal data provide additional information, contributing to better representation learning. This paper leverages these advantages by presenting a deep learning network architecture for segmenting breast cancer in multimodal computed tomography (CT) images, based on fused U-Net architectures that can learn richer representations from multimodal data. The multipath fusion architecture introduces an additional fusion module across the different paths, enabling the model to extract features from each modality at every level of the encoding path. This approach enhances segmentation performance and produces more robust results than using a single modality. The study reports experiments on multimodal CT images from 36 patients for training, validation, and testing. The results demonstrate that the proposed model outperforms the U-Net architecture across different combinations of input image modalities. Specifically, when combining two distinct CT modalities, the ZE and IoNW input combination yields the highest Dice score of 0.8546. [ABSTRACT FROM AUTHOR]
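The abstract describes a multipath fusion design: one encoding path per CT modality, with a fusion module joining the paths at each encoder level before a shared decoder produces the segmentation mask. The full paper is not reproduced in this record, so the PyTorch sketch below is only an illustration of that idea under stated assumptions: the class names (TwoPathFusionUNet, FusionModule), the concatenation-plus-1x1-convolution fusion, the channel widths, and the two single-channel modality inputs are hypothetical choices, not the authors' implementation.

import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    # Two 3x3 convolutions with batch norm and ReLU, as in a standard U-Net stage.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class FusionModule(nn.Module):
    # Hypothetical fusion module: concatenates same-level features from the two
    # modality paths and mixes them with a 1x1 convolution.
    def __init__(self, ch):
        super().__init__()
        self.mix = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.ReLU(inplace=True))
    def forward(self, a, b):
        return self.mix(torch.cat([a, b], dim=1))

class TwoPathFusionUNet(nn.Module):
    # Sketch of a dual-encoder U-Net: one encoder per CT modality, a fusion module
    # at each encoder level, and a single decoder fed by the fused skip features.
    def __init__(self, chs=(16, 32, 64)):
        super().__init__()
        self.enc_a, self.enc_b, self.fuse = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
        in_ch = 1
        for ch in chs:
            self.enc_a.append(ConvBlock(in_ch, ch))
            self.enc_b.append(ConvBlock(in_ch, ch))
            self.fuse.append(FusionModule(ch))
            in_ch = ch
        self.pool = nn.MaxPool2d(2)
        rev = list(reversed(chs))
        self.ups = nn.ModuleList(
            nn.ConvTranspose2d(rev[i], rev[i + 1], 2, stride=2) for i in range(len(rev) - 1))
        self.dec = nn.ModuleList(
            ConvBlock(rev[i + 1] * 2, rev[i + 1]) for i in range(len(rev) - 1))
        self.head = nn.Conv2d(chs[0], 1, 1)  # logits for a binary tumour mask

    def forward(self, x_a, x_b):
        fused_skips = []
        for i, (ea, eb, fu) in enumerate(zip(self.enc_a, self.enc_b, self.fuse)):
            x_a, x_b = ea(x_a), eb(x_b)
            fused_skips.append(fu(x_a, x_b))
            if i < len(self.enc_a) - 1:
                x_a, x_b = self.pool(x_a), self.pool(x_b)
        x = fused_skips[-1]
        for up, dec, skip in zip(self.ups, self.dec, reversed(fused_skips[:-1])):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)

# Usage: two single-channel spectral CT maps of the same slice (e.g. an effective
# atomic number map and an iodine map) as separate inputs.
model = TwoPathFusionUNet()
logits = model(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 1, 64, 64])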

Details

Language :
English
ISSN :
0924-669X
Volume :
54
Issue :
5
Database :
Complementary Index
Journal :
Applied Intelligence
Publication Type :
Academic Journal
Accession number :
176998877
Full Text :
https://doi.org/10.1007/s10489-023-05090-6