1. MVFusFra: A Multi-View Dynamic Fusion Framework for Multimodal Brain Tumor Segmentation
- Author
- Zheng Wei, Yi Ding, Zhen Qin, Ji Geng, Zhiguang Qin, Xiaolin Hou, and Kim-Kwang Raymond Choo
- Subjects
Fusion, Artificial neural network, Brain Neoplasms, Computer science, Brain, Pattern recognition, Magnetic Resonance Imaging, Computer Science Applications, Health Information Management, Image Processing, Computer-Assisted, Humans, Segmentation, Neural Networks, Computer, Artificial intelligence, Electrical and Electronic Engineering, Biotechnology
- Abstract
Medical practitioners generally rely on multimodal brain images, for example based on information from the axial, coronal, and sagittal views, to inform brain tumor diagnosis. Hence, to further utilize the 3D information embedded in such datasets, this paper proposes a multi-view dynamic fusion framework (hereafter referred to as MVFusFra) to improve the performance of brain tumor segmentation. The proposed framework consists of the following three key building blocks. First, a multi-view deep neural network architecture, in which multiple learning networks segment the brain tumor from different views, with each deep neural network processing the multimodal brain images from one single view. Second, a dynamic decision fusion method, which fuses the segmentation results from the multiple views into an integrated result; two different fusion strategies (i.e., voting and weighted averaging) are used to evaluate the fusion process. Third, a multi-view fusion loss (comprising a segmentation loss, a transition loss, and a decision loss) is proposed to facilitate the training of the multi-view learning networks, so as to ensure consistency in appearance and space, both when fusing segmentation results and when training the learning networks. We evaluate the performance of MVFusFra on the BRATS 2015 and BRATS 2018 datasets. Findings from the evaluations suggest that fused results from multiple views achieve better performance than segmentation results from any single view, and also imply the effectiveness of the proposed multi-view fusion loss. A comparative summary also shows that MVFusFra achieves better segmentation performance, in terms of efficiency, than other competing approaches.
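The two decision-fusion strategies named in the abstract (voting and weighted averaging over per-view segmentation outputs) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, array shapes, and fixed per-view weights are assumptions for the sake of the example.

```python
import numpy as np

def fuse_votes(preds):
    """Majority voting over per-view label maps.

    preds: list of (H, W) integer label maps, one per view
    (illustrative shapes; the paper works on 3D multimodal MRI).
    Returns the (H, W) label map with the most votes per voxel.
    """
    stacked = np.stack(preds)                      # (V, H, W)
    n_classes = int(stacked.max()) + 1
    # Count votes for each class at every position, then take the argmax.
    counts = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)

def fuse_weighted(probs, weights):
    """Weighted averaging of per-view class-probability maps.

    probs: list of (C, H, W) softmax outputs, one per view.
    weights: one scalar per view (normalized here; the paper's
    'dynamic' fusion would learn or adapt these, which is omitted).
    Returns the (C, H, W) averaged probability map.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(probs), axes=1)  # (C, H, W)
```

A final label map from the weighted variant would be `fuse_weighted(probs, weights).argmax(axis=0)`, mirroring how per-voxel decisions are taken after fusion.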
- Published
- 2022