1. Attention‐guided multi‐scale context aggregation network for multi‐modal brain glioma segmentation.
- Author
- Wu, Shaozhi, Cao, Yunjian, Li, Xinke, Liu, Qiyu, Ye, Yuyun, Liu, Xingang, Zeng, Liaoyuan, and Tian, Miao
- Subjects
- *GLIOMAS, *BRAIN tumors, *CONVOLUTIONAL neural networks, *MAGNETIC resonance imaging, *PERFORMANCE standards
- Abstract
Background: Accurate segmentation of brain glioma is a critical prerequisite for clinical diagnosis, surgical planning, and treatment evaluation. In the current clinical workflow, physicians typically delineate brain tumor subregions slice‐by‐slice, which is susceptible to inter‐rater variability and also time‐consuming. Besides, even though convolutional neural networks (CNNs) are driving progress, the performance of standard models still has room for further improvement.
Purpose: To address these issues, this paper proposes an attention‐guided multi‐scale context aggregation network (AMCA‐Net) for accurate segmentation of brain glioma in multi‐modal magnetic resonance imaging (MRI) images.
Methods: AMCA‐Net extracts multi‐scale features from the MRI images and fuses the extracted discriminative features via a self‐attention mechanism for brain glioma segmentation. Feature extraction is performed via a series of down‐sampling and convolution layers, and global context information guidance (GCIG) modules are developed to fuse the extracted features with contextual information. At the end of the down‐sampling path, a multi‐scale fusion (MSF) module is designed to exploit and combine all the extracted multi‐scale features. Each of the GCIG and MSF modules contains a channel attention (CA) module that can adaptively calibrate feature responses and emphasize the most relevant features. Finally, predictions at different resolutions are fused with weightings given by a multi‐resolution adaptation (MRA) module, instead of averaging or max‐pooling, to improve the final segmentation results.
Results: The datasets used in this paper are the publicly accessible Multimodal Brain Tumor Segmentation Challenges 2018 (BraTS2018) and 2019 (BraTS2019); BraTS2018 contains 285 patient cases and BraTS2019 contains 335 cases. Experiments show that AMCA‐Net performs better than or comparably to other state‐of‐the‐art models. On BraTS2018, the Dice score and Hausdorff 95 are 90.4% and 10.2 mm for the whole tumor region (WT), 83.9% and 7.4 mm for the tumor core region (TC), and 80.2% and 4.3 mm for the enhancing tumor region (ET); on BraTS2019, they are 91.0% and 10.7 mm for the WT, 84.2% and 8.4 mm for the TC, and 80.1% and 4.8 mm for the ET.
Conclusions: The proposed AMCA‐Net performs comparably to several state‐of‐the‐art neural network models in identifying the peritumoral edema, enhancing tumor, and necrotic and non‐enhancing tumor core of brain glioma, and has great potential for clinical practice. In future research, we will further explore the feasibility of applying AMCA‐Net to other similar segmentation tasks. [ABSTRACT FROM AUTHOR]
(Illustrative code sketches of the channel attention, the multi‐resolution prediction fusion, and the Dice metric appear after this record.)
- Published
- 2023
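The abstract describes the channel attention (CA) and multi‐resolution adaptation (MRA) modules only at a functional level. The PyTorch sketch below shows one common way such components can be realized: a squeeze‐and‐excitation style channel attention block and a softmax‐weighted fusion of predictions from several resolutions. The class names, reduction factor, and upsampling choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): channel attention and learned
# weighted fusion of multi-resolution predictions, illustrating the roles the
# abstract assigns to the CA and MRA modules.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Recalibrate channel responses with global pooling plus a small bottleneck MLP."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, H, W); squeeze spatial dims, excite per channel
        w = self.fc(x.mean(dim=(2, 3)))           # (batch, channels)
        return x * w.unsqueeze(-1).unsqueeze(-1)  # rescale each channel map


class WeightedPredictionFusion(nn.Module):
    """Fuse predictions from several resolutions with learned softmax weights
    instead of plain averaging or max-pooling."""

    def __init__(self, num_scales):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_scales))

    def forward(self, preds):
        # preds: list of (batch, classes, H_i, W_i) maps, finest resolution first.
        target_size = preds[0].shape[2:]
        w = torch.softmax(self.logits, dim=0)
        return sum(
            w[i] * F.interpolate(p, size=target_size, mode="bilinear",
                                 align_corners=False)
            for i, p in enumerate(preds)
        )
```

In this sketch the MRA‐style fusion learns one scalar weight per scale; the module described in the paper may instead condition its weights on the input, which the abstract does not specify.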
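For readers unfamiliar with the reported metrics, the following small helper (an assumption‐labeled illustration, not taken from the paper) shows how the Dice score quoted in the Results is typically computed for a binary tumor mask.

```python
# Hypothetical helper: Dice = 2 * |pred AND target| / (|pred| + |target|).
import numpy as np


def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two boolean masks of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))


# Example: two partially overlapping 2x2 masks -> Dice = 2*1 / (2 + 2) = 0.5
print(dice_score([[1, 1], [0, 0]], [[1, 0], [0, 1]]))
```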