1,576 results for "brain tumor segmentation"
Search Results
102. FedGrav: An Adaptive Federated Aggregation Algorithm for Multi-institutional Medical Image Segmentation
- Author
-
Deng, Zhifang, Li, Dandan, Tan, Shi, Fu, Ying, Yuan, Xueguang, Huang, Xiaohong, Zhang, Yong, and Zhou, Guangwei
- Published
- 2023
- Full Text
- View/download PDF
103. Brain Tumor Segmentation Using Ensemble Deep Neural Networks with MRI Images
- Author
-
Weiss Cohen, Miri
- Published
- 2023
- Full Text
- View/download PDF
104. Brain Tumor Image Segmentation Network Based on Dual Attention Mechanism
- Author
-
He, Fuyun, Zhang, Yao, Wei, Yan, Qian, Youwei, Hu, Cong, and Tang, Xiaohu
- Published
- 2023
- Full Text
- View/download PDF
105. Tuning U-Net for Brain Tumor Segmentation
- Author
-
Futrega, Michał, Marcinkiewicz, Michał, and Ribalta, Pablo
- Published
- 2023
- Full Text
- View/download PDF
106. Brain Tumor Segmentation Using Neural Ordinary Differential Equations with UNet-Context Encoding Network
- Author
-
Sadique, M. S., Rahman, M. M., Farzana, W., Temtam, A., and Iftekharuddin, K. M.
- Published
- 2023
- Full Text
- View/download PDF
107. An UNet-Based Brain Tumor Segmentation Framework via Optimal Mass Transportation Pre-processing
- Author
-
Liao, Jia-Wei, Huang, Tsung-Ming, Li, Tiexiang, Lin, Wen-Wei, Wang, Han, and Yau, Shing-Tung
- Published
- 2023
- Full Text
- View/download PDF
108. Multi-modal Transformer for Brain Tumor Segmentation
- Author
-
Cho, Jihoon and Park, Jinah
- Published
- 2023
- Full Text
- View/download PDF
109. An Efficient Cascade of U-Net-Like Convolutional Neural Networks Devoted to Brain Tumor Segmentation
- Author
-
Bouchet, Philippe, Deloges, Jean-Baptiste, Canton-Bacara, Hugo, Pusel, Gaëtan, Pinot, Lucas, Elbaz, Othman, and Boutry, Nicolas
- Published
- 2023
- Full Text
- View/download PDF
110. Diffraction Block in Extended nn-UNet for Brain Tumor Segmentation
- Author
-
Hou, Qingfan, Wang, Zhuofei, Wang, Jiao, Jiang, Jian, and Peng, Yanjun
- Published
- 2023
- Full Text
- View/download PDF
111. Brain Tumor Segmentation Using 3D Attention U Net
- Author
-
Chinnam, Siva Koteswara Rao, Sistla, Venkatramaphanikumar, and Kolli, Venkata Krishna Kishore
- Published
- 2023
- Full Text
- View/download PDF
112. Semi-Supervised Medical Image Segmentation on Data from Different Distributions
- Author
-
Sowmya, K. and Varaprasad, G.
- Published
- 2023
- Full Text
- View/download PDF
113. Efficient Segmentation of Tumor with Convolutional Neural Network in Brain MRI Images
- Author
-
Ingle, Archana, Roja, Mani, Sankhe, Manoj, and Patkar, Deepak
- Published
- 2023
- Full Text
- View/download PDF
114. Brain Tumor Segmentation Using Deep Neural Networks: A Comparative Study
- Author
-
Kumar Gautam, Pankaj, Goyal, Rishabh, Upadhyay, Udit, and Naik, Dinesh
- Published
- 2023
- Full Text
- View/download PDF
115. Brain Tumor Segmentation Using Fully Convolution Neural Network
- Author
-
Kapdi, Rupal A., Patel, Jigna A., and Patel, Jitali
- Published
- 2023
- Full Text
- View/download PDF
116. Comparison Performance of Deep Learning Models for Brain Tumor Segmentation Based on 2D Convolutional Neural Network
- Author
-
Hardani, Dian Nova Kusuma, Nugroho, Hanung Adi, and Ardiyanto, Igi
- Published
- 2023
- Full Text
- View/download PDF
117. Sub-region Segmentation of Brain Tumors from Multimodal MRI Images Using 3D U-Net
- Author
-
Ali, Ammar Alhaj, Katta, Rasin, Jasek, Roman, Chramco, Bronislav, and Krayem, Said
- Published
- 2023
- Full Text
- View/download PDF
118. Brain Tumor Segmentation Using U-Net
- Author
-
Jyothsna, Paturi, Spandhana, Mamidi Sai Sri Venkata, Jayasri, Rayi, Sandeep, Nirujogi Venkata Sai, Swathi, K., Marline Joys Kumari, N., Rao, N. Thirupathi, and Bhattacharyya, Debnath
- Published
- 2023
- Full Text
- View/download PDF
119. Advancements in deep learning techniques for brain tumor segmentation: A survey
- Author
-
Chandrakant M. Umarani, S.G. Gollagi, Shridhar Allagi, Kuldeep Sambrekar, and Sanjay B. Ankali
- Subjects
Brain tumor segmentation, U-Net architecture, Self-attention mechanisms, MRI image analysis, Deep learning techniques, Computer applications to medicine. Medical informatics, R858-859.7
- Abstract
The escalating incidence of brain tumors underscores the pressing demand in neuro-oncology for enhanced diagnostic methodologies. The existing literature, predominantly focused on the categorization of MRI images, lacks comprehensive solutions for the many challenges of brain tumor segmentation, including imaging anomalies, the nuanced delineation of tumor margins, tumor heterogeneity, and classification ambiguities. This research addresses these challenges by proposing a novel deep learning framework that integrates the U-Net architecture with self-attention mechanisms, tailored for brain tumor segmentation. The study rigorously evaluates and contrasts prevailing deep learning techniques, with an emphasis on the efficacy of the U-Net architecture in discerning both specific and generalized features within three-dimensional brain imaging. Integrating self-attention mechanisms demonstrably improves segmentation accuracy by directing focus towards pivotal tumor regions. Principal findings reveal that the proposed model surpasses brain tumor segmentation methods published from 2020 to 2024 in accuracy, precision, sensitivity, and specificity. The authors conclude that this combination establishes a new benchmark in medical image segmentation, with the potential to improve diagnostic capabilities and therapeutic strategies for patients and clinicians alike.
- Published
- 2024
- Full Text
- View/download PDF
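Entry 119's framework couples U-Net with self-attention. Scaled dot-product self-attention over a flattened feature map can be sketched in NumPy as follows; the shapes and random weights are illustrative assumptions, not the survey's model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, Wq, Wk, Wv):
    # feat: (N, C) flattened feature map; returns (N, d) re-weighted features.
    q, k, v = feat @ Wq, feat @ Wk, feat @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # (N, N) pairwise affinities
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
C, d = 16, 8
feat = rng.standard_normal((64, C))          # e.g. an 8x8 map, flattened
Wq, Wk, Wv = (rng.standard_normal((C, d)) for _ in range(3))
out = self_attention(feat, Wq, Wk, Wv)
print(out.shape)  # (64, 8)
```

In a segmentation network such a block typically sits at the bottleneck or in skip connections, where N is small enough for the (N, N) affinity matrix to be affordable.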
120. Automated multi-class high-grade glioma segmentation based on T1Gd and FLAIR images
- Author
-
Areen K. Al-Bashir, Abeer N. Al Obeid, Mohammad A. Al-Abed, Imad S. Athamneh, Maysoon A-R. Banihani, and Rabah M. Al Abdi
- Subjects
Brain tumor segmentation, Convolutional neural networks (CNNs), HGG, Pre-trained CNNs, U-net, Computer applications to medicine. Medical informatics, R858-859.7
- Abstract
Glioma is the most prevalent primary malignant brain tumor. Segmentation of glioma regions using magnetic resonance imaging (MRI) is essential for treatment planning. However, segmentation is usually based on four MRI modalities: T1, T2, T1Gd, and FLAIR. Acquiring all four modalities increases patients' time inside the scanner and drives up the segmentation pipeline's processing time; moreover, not all modalities are acquired in some cases, owing to limited scanner time or uncooperative patients. Therefore, U-Net-based fully convolutional neural networks were employed for automated segmentation to answer the urgent question: does a smaller number of MRI modalities limit segmentation accuracy? The proposed approach was trained, validated, and tested on 100 high-grade glioma (HGG) cases twice, once with all MRI sequences and once with only FLAIR and T1Gd. On the test set, the baseline U-Net model gave a mean Dice score of 0.9166 using all MRI sequences and 0.9190 using FLAIR and T1Gd alone. To check for possible performance improvement on the FLAIR and T1Gd modalities, an ensemble of pre-trained VGG16, VGG19, and ResNet50 networks serving as modified U-Net encoders was employed for automated glioma segmentation based on T1Gd and FLAIR only and compared with the baseline U-Net. These models were trained, validated, and tested on 259 HGG cases. The baseline U-Net and the ensembles with VGG16, VGG19, and ResNet50 encoders achieved mean Dice scores of 0.9395, 0.9360, 0.9359, and 0.9356, respectively. The results were also compared with studies based on four MRI modalities. The work indicates that FLAIR and T1Gd are the most prominent contributors to the segmentation process. The proposed baseline U-Net is robust enough for segmenting HGG sub-tumoral structures and competitive with other state-of-the-art works.
- Published
- 2024
- Full Text
- View/download PDF
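The mean Dice scores quoted in this abstract follow the standard Dice similarity coefficient. A minimal NumPy sketch (the toy masks are hypothetical, not from the paper):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Two overlapping 4x4 squares: 16 px each, 9 px overlap.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(round(dice_score(a, b), 4))  # 2*9/(16+16) = 0.5625
```

The small epsilon keeps the score defined when both masks are empty, a common convention when averaging Dice over many cases.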
121. Augmented Transformer network for MRI brain tumor segmentation
- Author
-
Muqing Zhang, Dongwei Liu, Qiule Sun, Yutong Han, Bin Liu, Jianxin Zhang, and Mingli Zhang
- Subjects
Brain tumor segmentation, U-Net, Transformer, CNNs, Augmented Shortcuts, Paired attention, Electronic computers. Computer science, QA75.5-76.95
- Abstract
The Augmented Transformer U-Net (AugTransU-Net) is proposed to address limitations in existing transformer-based U-Net models for brain tumor segmentation. While previous models effectively capture long-range dependencies and global context, they partly ignore feature hierarchy and lose feature diversity as depth increases. The proposed AugTransU-Net integrates two advanced transformer modules at different positions within a U-shaped architecture to overcome these issues. The fundamental innovation lies in improved augmentation transformer modules that incorporate Augmented Shortcuts into standard transformer blocks. These augmented modules, placed at the bottleneck of the segmentation network, form multi-head self-attention blocks with circulant projections, aiming to maintain feature diversity and enhance feature interaction. Furthermore, paired attention modules operate from low to high layers throughout the network, establishing long-range relationships in both spatial and channel dimensions. This allows each layer to comprehend the overall brain tumor structure and capture semantic information at critical locations. Experimental results demonstrate the effectiveness and competitiveness of AugTransU-Net against representative works. The model achieves Dice values of 89.7%/89.8%, 78.2%/78.6%, and 80.4%/81.9% for whole tumor (WT), enhancing tumor (ET), and tumor core (TC) segmentation on the BraTS2019-2020 validation datasets, respectively. The code for AugTransU-Net will be made publicly available at https://github.com/MuqinZ/AugTransUnet.
- Published
- 2024
- Full Text
- View/download PDF
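The "Augmented Shortcuts" idea referenced above enriches a transformer block's identity shortcut with extra learnable projections of the input. A toy NumPy sketch using circulant projections; the shapes, random weights, and the stand-in attention output are illustrative assumptions, not AugTransU-Net's implementation:

```python
import numpy as np

def circulant(kernel):
    """Circulant matrix whose i-th row is the kernel rolled by i."""
    n = kernel.shape[0]
    return np.stack([np.roll(kernel, i) for i in range(n)])

def augmented_shortcut(x, attn_out, kernels):
    """Add circulant-projected copies of the input to the attention
    output, alongside the plain identity shortcut."""
    aug = sum(x @ circulant(k) for k in kernels)
    return attn_out + x + aug

rng = np.random.default_rng(1)
N, C = 32, 8
x = rng.standard_normal((N, C))
attn_out = rng.standard_normal((N, C))        # stand-in for MHSA(x)
kernels = [rng.standard_normal(C) for _ in range(2)]
y = augmented_shortcut(x, attn_out, kernels)
print(y.shape)  # (32, 8)
```

Each circulant projection costs only a length-C kernel of parameters, which is why such shortcuts can add feature diversity cheaply.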
122. RFS+: A Clinically Adaptable and Computationally Efficient Strategy for Enhanced Brain Tumor Segmentation.
- Author
-
Duman, Abdulkerim, Karakuş, Oktay, Sun, Xianfang, Thomas, Solly, Powell, James, and Spezi, Emiliano
- Subjects
-
Deep learning, Digital image processing, Magnetic resonance imaging, Brain tumors, Database management, Automation, Research funding, Brain tumor diagnosis
- Abstract
Simple Summary: In our study, we addressed the challenge of brain tumor segmentation using a range of MRI modalities. While leading models show proficiency on standardized datasets, their versatility across different clinical environments remains uncertain. We introduced 'Region-Focused Selection Plus (RFS+)', enhancing segmentation performance for clinically defined labels such as gross tumor volume in our local dataset. RFS+ integrates segmentation approaches and normalization techniques, leveraging the strengths of each approach and minimizing their drawbacks by selecting the top three models. RFS+ demonstrated efficient brain tumor segmentation, using 67% less memory and requiring 92% less training time than the state-of-the-art model. The strategy achieved better performance than the leading model, with a 79.22% Dice score. These findings highlight the potential of RFS+ in amplifying the adaptability of deep learning models for brain tumor segmentation in clinical applications. However, further research is needed to validate the broader clinical efficacy of RFS+. Automated brain tumor segmentation has significant importance, especially for disease diagnosis and treatment planning. The study utilizes a range of MRI modalities, namely T1-weighted (T1), T1-contrast-enhanced (T1ce), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR), each providing unique and vital information for accurate tumor localization. While state-of-the-art models perform well on standardized datasets like the BraTS dataset, their suitability in diverse clinical settings (matrix size, slice thickness, and manufacturer-related differences such as repetition time and echo time) remains a subject of debate.
This research aims to address this gap by introducing a novel 'Region-Focused Selection Plus (RFS+)' strategy designed to efficiently improve the generalization and quantification capabilities of deep learning (DL) models for automatic brain tumor segmentation. RFS+ advocates a targeted approach, focusing on one region at a time. It presents a holistic strategy that maximizes the benefits of various segmentation methods by customizing input masks, activation functions, loss functions, and normalization techniques. Upon identifying the top three models for each specific region in the training dataset, RFS+ employs a weighted ensemble learning technique to mitigate the limitations inherent in each segmentation approach. In this study, we explore three distinct approaches, namely multi-class, multi-label, and binary-class brain tumor segmentation, coupled with various normalization techniques applied to individual sub-regions; combinations of different approaches with diverse normalization techniques are also investigated. A comparative analysis is conducted among three U-Net model variants, including the state-of-the-art models that emerged victorious in the BraTS 2020 and 2021 challenges. These models are evaluated using the Dice similarity coefficient (DSC) on the 2021 BraTS validation dataset. The 2D U-Net model yielded DSC scores of 77.45%, 82.14%, and 90.82% for enhancing tumor (ET), tumor core (TC), and the whole tumor (WT), respectively. Furthermore, on our local dataset, the 2D U-Net model augmented with the RFS+ strategy demonstrates superior performance compared to the state-of-the-art model, achieving the highest DSC score of 79.22% for gross tumor volume (GTV). The model utilizing RFS+ requires 10% less training data, 67% less memory, and 92% less training time than the state-of-the-art model.
These results confirm the effectiveness of the RFS+ strategy for enhancing the generalizability of DL models in brain tumor segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
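RFS+ combines its top three per-region models with a weighted ensemble. One common way to realize such an ensemble over per-model probability maps is sketched below in NumPy; the weights and map sizes are illustrative assumptions, not the paper's values:

```python
import numpy as np

def weighted_ensemble(prob_maps, weights, threshold=0.5):
    """Combine per-model probability maps with normalized weights,
    then threshold the fused map into a binary mask."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = np.tensordot(w, np.stack(prob_maps), axes=1)  # weighted average
    return (fused >= threshold).astype(np.uint8)

rng = np.random.default_rng(2)
maps = [rng.random((16, 16)) for _ in range(3)]   # three models' outputs
mask = weighted_ensemble(maps, weights=[0.5, 0.3, 0.2])
print(mask.shape, mask.dtype)  # (16, 16) uint8
```

In a region-focused scheme like RFS+, one such ensemble would be run per tumor sub-region, each with its own model triple and weights.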
123. Learning intra-inter-modality complementary for brain tumor segmentation.
- Author
-
Zheng, Jiangpeng, Shi, Fan, Zhao, Meng, Jia, Chen, and Wang, Congcong
- Subjects
-
Brain tumors, Image segmentation, Diagnostic imaging, Magnetic resonance imaging
- Abstract
Multi-modal MRI has become a valuable tool in medical imaging for diagnosing and investigating brain tumors, as it provides complementary information from multiple modalities. However, traditional methods for multi-modal MRI segmentation with the U-Net architecture typically fuse the modalities at an early or mid stage of the network, without considering inter-modal feature fusion or dependencies. To address this, a novel CMMFNet (cross-modal multi-scale fusion network) is proposed, which explores both intra-modality and inter-modality relationships in brain tumor segmentation. The network is built on a transformer-based multi-encoder, single-decoder structure, which performs nested multi-modal fusion of high-level representations from the different modalities. Additionally, CMMFNet uses a focusing mechanism that extracts larger receptive fields at the low-level scale and connects them effectively to the decoding layer. The multi-modal feature fusion module nests modality-aware feature aggregation, and multi-modal features are fused through long-term dependencies within each modality in the self-attention and cross-attention layers. Experiments showed that CMMFNet outperformed state-of-the-art methods in brain tumor segmentation on the BraTS2020 benchmark dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
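The cross-attention layers described above let one modality's features be re-weighted by another's content. A minimal single-head cross-attention sketch in NumPy; the token counts, dimensions, modality names, and random weights are assumptions for illustration only:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat_a, feat_b, Wq, Wk, Wv):
    """Cross-attention: queries come from modality A, keys/values
    from modality B, so A's tokens are re-weighted by B's content."""
    q, k, v = feat_a @ Wq, feat_b @ Wk, feat_b @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    return attn @ v

rng = np.random.default_rng(3)
C, d = 16, 8
t1ce = rng.standard_normal((64, C))   # e.g. T1Gd feature tokens
flair = rng.standard_normal((64, C))  # e.g. FLAIR feature tokens
Wq, Wk, Wv = (rng.standard_normal((C, d)) for _ in range(3))
fused = cross_attention(t1ce, flair, Wq, Wk, Wv)
print(fused.shape)  # (64, 8)
```

Swapping the roles of the two modalities (and summing or concatenating the results) gives the symmetric inter-modality fusion that such networks typically use.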
124. Efficient Clustering of Brain Tumor Segments Using Level-Set Hybrid Machine Learning Algorithms.
- Author
-
He Lvliang and Wu Hua
- Subjects
Machine learning, Brain tumors, Wavelet transforms, Feature extraction, Discrete wavelet transforms, K-means clustering, Deep learning
- Abstract
Cluster computing is an essential technology in distributed environments for practical data analysis of complex datasets, as in tumor segmentation and disease classification. Real-world applications such as medicine and transport now require big-data analytics environments. This article considers complex image-data environments, namely brain tumor segmentation, based on advanced clustering techniques for effective tumor prediction. A state-of-the-art analysis used hierarchical clustering to extract initial tumor segments from the image; these segments are further refined using a novel noise-detection-based level-set technique. Unsupervised fuzzy c-means and k-means clustering are used to segment the disease-affected region and to enhance the noise detection used in the level set. Effective features are extracted using the gray-level co-occurrence matrix and the redundant discrete wavelet transform. Finally, malignant and benign brain tumor images are classified using deep probabilistic neural networks. Publicly available datasets are used to validate the proposed algorithms. Experimental results show that the proposed pipeline performs effectively in both tumor segmentation and classification. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
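The fuzzy c-means step used in this pipeline alternates membership and centroid updates until the partition stabilizes. A plain NumPy sketch of the generic algorithm; the toy 2-D data and parameters are assumptions, not the paper's setup:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Generic fuzzy c-means: alternate membership and centroid
    updates; memberships for each point sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-10
        U = 1.0 / (d ** (2.0 / (m - 1.0)))   # closer centers -> higher membership
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated blobs standing in for tumor / non-tumor feature vectors.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
U, centers = fuzzy_cmeans(X, c=2)
print(U.shape, centers.shape)  # (40, 2) (2, 2)
```

The fuzzifier m controls how soft the partition is; m near 1 approaches hard k-means, which is why the two methods are often paired as in this paper.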
125. SCAU-net: 3D self-calibrated attention U-Net for brain tumor segmentation.
- Author
-
Liu, Dongwei, Sheng, Ning, Han, Yutong, Hou, Yaqing, Liu, Bin, Zhang, Jianxin, and Zhang, Qiang
- Subjects
-
Brain tumors, Image segmentation, Research personnel, Markov random fields
- Abstract
Recently, the U-Net architecture, with its strong adaptability, has become prevalent in MRI brain tumor segmentation. Researchers have demonstrated that introducing attention mechanisms, especially self-attention, into U-Net can effectively improve segmentation performance. However, self-attention carries a heavy computational burden and quadratic complexity, and it ignores potential correlations between different samples. Besides, current attention-based segmentation models seldom focus on adaptively computing the receptive field of tumor images, which may capture discriminative information effectively. To address these issues, we propose a novel 3D U-Net-based brain tumor segmentation model, dubbed self-calibrated attention U-Net (SCAU-Net), which simultaneously introduces two lightweight modules into a single U-Net: an external attention module and a self-calibrated convolution module. More specifically, SCAU-Net embeds external attention into the skip connection to better utilize encoder features for semantic up-sampling, and it replaces the original convolution layers with several 3D self-calibrated convolution modules, which adaptively compute the receptive field of tumor images for effective segmentation. SCAU-Net achieves segmentation results on the BraTS 2020 validation dataset with Dice similarity coefficients of 0.905, 0.821, and 0.781 and 95% Hausdorff distances (HD95) of 4.0, 9.7, and 29.3 on the whole tumor, tumor core, and enhancing tumor, respectively. Similarly competitive results are obtained on the BraTS 2018 and BraTS 2019 validation datasets. Experimental results demonstrate that SCAU-Net outperforms its baseline and achieves outstanding performance compared to various representative brain tumor models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
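External attention, as used in SCAU-Net's skip connections, replaces pairwise self-attention with affinities against a small learnable memory, giving linear rather than quadratic cost in the number of positions. A generic NumPy sketch with the usual double normalization; the shapes and random memories are illustrative assumptions, not SCAU-Net's implementation:

```python
import numpy as np

def external_attention(feat, Mk, Mv):
    """External attention: affinities against a small learnable memory
    (Mk, Mv) instead of an (N, N) self-attention map."""
    attn = feat @ Mk.T                               # (N, S) position-to-memory affinity
    attn = np.exp(attn - attn.max(axis=0, keepdims=True))
    attn = attn / attn.sum(axis=0, keepdims=True)    # softmax over positions
    attn = attn / (attn.sum(axis=1, keepdims=True) + 1e-9)  # double normalization
    return attn @ Mv                                 # (N, C) re-weighted features

rng = np.random.default_rng(5)
N, C, S = 64, 16, 8                # positions, channels, memory slots
feat = rng.standard_normal((N, C))
Mk = rng.standard_normal((S, C))   # key memory
Mv = rng.standard_normal((S, C))   # value memory
out = external_attention(feat, Mk, Mv)
print(out.shape)  # (64, 16)
```

Because the memories are shared across all inputs, external attention can also capture correlations between different samples, one of the motivations cited in the abstract.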
126. A hybrid weighted fuzzy approach for brain tumor segmentation using MR images.
- Author
-
Chahal, Prabhjot Kaur and Pandey, Shreelekha
- Subjects
-
Computer-aided diagnosis, Brain tumors, Magnetic resonance imaging, Image segmentation, Medical communication, Tumor classification
- Abstract
Human brain tumor detection and classification are time-consuming yet vital tasks for any medical expert. Computer-aided diagnosis is commonly used to enhance diagnostic capability and attain maximum detection accuracy. Despite significant research, brain tumor segmentation remains an open challenge due to variability in image modality, contrast, tumor type, and other factors. Many works on manual, semi-automatic, and fully automatic tumor segmentation of magnetic resonance (MR) brain images are available, yet there is still room for more efficient and accurate approaches. This manuscript proposes a hybrid weighted fuzzy k-means (WFKM) brain tumor segmentation algorithm for MR images that retrieves more meaningful clusters. It is based on fuzzified weights over spatial context with an illumination-penalized membership approach, which helps resolve pixels' multiple memberships and avoids an exponential increase in the number of iterations. The segmented image is then used for tumor type identification as benign or malignant by means of an SVM. Experiments on MR images from a Digital Imaging and Communications in Medicine (DICOM) dataset show that the fusion of the proposed WFKM and SVM outperforms many existing approaches, and performance evaluation parameters show better overall accuracy. Results on a variety of images further demonstrate the proposal's applicability to detecting ranges and shapes of brain tumors. The approach excels qualitatively and quantitatively, reporting an average accuracy of 97% on the DICOM dataset with the total number of images varying from 100 to 1000. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
127. Brain tumor image segmentation based on prior knowledge via transformer.
- Author
-
Li, Qiang, Liu, Hengxin, Nie, Weizhi, and Wu, Ting
- Subjects
-
Brain tumors, Prior learning, Brain imaging, Image segmentation, Magnetic resonance imaging, Recommender systems, Image fusion
- Abstract
Many researchers use AI to improve the accuracy of early diagnostic techniques. However, owing to tumors' uneven shapes, fuzzy borders, and scarce data, existing tumor segmentation methods do not produce accurate segmentation results. We introduce learned prior knowledge to filter noisy information and guide the final network toward a more accurate segmentation model. First, we introduce a classification network with an attention block to highlight the potential location of the brain tumor and obtain a rough diagnosis as the prior knowledge. Second, we provide a novel image fusion network, consisting of a transformer with cross attention, to merge tumor localization information with brain MRI images. Third, we propose a novel multilayer transformer information fusion network, combined with the classic U-Net, to handle the guidance of prior knowledge. The higher performance of the suggested method is demonstrated by comparison with contemporary methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
128. A NOVEL DEEP LEARNING METHOD FOR BRAIN TUMOR SEGMENTATION IN MAGNETIC RESONANCE IMAGES BASED ON RESIDUAL UNITS AND MODIFIED U-NET MODEL.
- Author
-
CHEN, YUXUAN, CHEN, YUNYI, CHEN, JIAN, HUANG, CHENXI, WANG, BIN, and CUI, XU
- Subjects
- *
DEEP learning , *MAGNETIC resonance imaging , *BRAIN tumors , *TUMOR diagnosis , *ORGANS (Anatomy) , *DATA mining - Abstract
Brain tumors are among the deadliest forms of cancer, as the brain is a crucial organ for human activity. Early detection and treatment are key to recovery. An expert's final decision on tumor diagnosis mainly depends on the evaluation of Magnetic Resonance Imaging (MRI) images. However, the traditional manual assessment process is time-consuming, error-prone, and reliant on the doctor's experience and knowledge, among other variable factors. An automated brain tumor detection system can assist radiologists and internal medicine experts in detecting and diagnosing brain tumors. This study proposes a novel deep learning model that combines residual units with a modified U-Net framework for brain tumor segmentation in brain MR images. The U-Net-based framework is implemented with a stack of neural units and residual units and uses the Leaky Rectified Linear Unit (LReLU) as the model's activation function. First, neural units are added before the first layers of downsampling and upsampling to enhance feature propagation and reuse. Then, stacked residual blocks are applied to extract deep semantic information during downsampling and to classify pixels during upsampling. Finally, a single-layer convolution outputs the predicted segmentation images. The experimental results show that the model's Dice Similarity Coefficient is 90.79%, demonstrating better segmentation accuracy than other research models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
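The residual units with LReLU activation described above follow the familiar identity-shortcut pattern. A minimal NumPy sketch is below; dense matrix maps stand in for the paper's convolutions, and the layer widths are illustrative assumptions:

```python
import numpy as np

def lrelu(x, alpha=0.01):
    # Leaky ReLU, the activation used by the model described above
    return np.where(x > 0, x, alpha * x)

def residual_unit(x, w1, w2):
    """Identity-shortcut residual unit; dense maps stand in for convolutions."""
    h = lrelu(x @ w1)          # first "conv" + activation
    return lrelu(x + h @ w2)   # second "conv", then add the shortcut
```

The shortcut means the unit can fall back to (a leaky-rectified) identity when its weights contribute nothing, which is what eases training of deeper stacks.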
129. Brain tumor image segmentation based on improved FPN.
- Author
-
Sun, Haitao, Yang, Shuai, Chen, Lijuan, Liao, Pingyan, Liu, Xiangping, Liu, Ying, and Wang, Ning
- Subjects
BRAIN tumors ,CONVOLUTIONAL neural networks ,IMAGE segmentation ,MACHINE learning ,BRAIN imaging ,DEEP learning - Abstract
Purpose: Automatic segmentation of brain tumors with deep learning algorithms is one of the research hotspots in medical image segmentation. An improved FPN network is proposed to improve the segmentation of brain tumors. Materials and methods: To address the weak processing ability of the traditional fully convolutional network (FCN), which leads to the loss of detail in tumor segmentation, this paper proposes a brain tumor image segmentation method based on an improved feature pyramid network (FPN). To improve the segmentation of brain tumors, we introduced the FPN structure into the U-Net structure, capturing multi-scale context by combining the different scale information in the U-Net model with the multi-receptive-field high-level features of the FPN, thereby improving the model's adaptability to features at different scales. Results: Performance evaluation indicators show that the proposed improved FPN model achieves 99.1% accuracy, a 92% Dice score, and an 86% Jaccard index, outperforming other segmentation models on each metric. In addition, the segmentation result diagrams show that our algorithm's results are closer to the ground truth and retain more brain tumor detail, while the results of other algorithms are smoother. Conclusions: The experimental results show that this method can effectively segment brain tumor regions, generalizes reasonably well, and segments better than other networks. It has positive significance for the clinical diagnosis of brain tumors. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
130. Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation.
- Author
-
Li, Hao, Nan, Yang, Del Ser, Javier, and Yang, Guang
- Subjects
- *
BRAIN tumors , *DEEP learning , *IMAGE segmentation , *IMAGE recognition (Computer vision) , *QUANTILE regression , *BAYESIAN analysis - Abstract
Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an efficient solution to this problem, as it provides a measure of confidence in the segmentation results. The current uncertainty estimation methods based on quantile regression, Bayesian neural network, ensemble, and Monte Carlo dropout are limited by their high computational cost and inconsistency. In order to overcome these challenges, Evidential Deep Learning (EDL) was developed in recent work but primarily for natural image classification and showed inferior segmentation results. In this paper, we proposed a region-based EDL segmentation framework that can generate reliable uncertainty maps and accurate segmentation results, which is robust to noise and image corruption. We used the Theory of Evidence to interpret the output of a neural network as evidence values gathered from input features. Following Subjective Logic, evidence was parameterized as a Dirichlet distribution, and predicted probabilities were treated as subjective opinions. To evaluate the performance of our model on segmentation and uncertainty estimation, we conducted quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrated the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, our proposed new framework maintained the advantages of low computational cost and easy implementation and showed the potential for clinical application. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
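The evidential step described above (network outputs interpreted as evidence, parameterized as a Dirichlet distribution) can be sketched per voxel as follows; the choice of softplus as the evidence function is a common convention in EDL work, assumed here rather than taken from the paper:

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def dirichlet_opinion(logits):
    """Map per-class logits to Dirichlet evidence, probabilities, and vacuity."""
    evidence = softplus(logits)            # non-negative evidence per class
    alpha = evidence + 1.0                 # Dirichlet concentration parameters
    S = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength
    prob = alpha / S                       # expected class probabilities
    K = logits.shape[-1]
    uncertainty = K / S                    # high when total evidence is low
    return prob, uncertainty.squeeze(-1)
```

Unlike a plain softmax, this formulation yields an explicit uncertainty value for each voxel: where little evidence is gathered, the Dirichlet strength S is small and the vacuity K/S is large.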
131. Enhancing Brain Tumor Segmentation Accuracy through Scalable Federated Learning with Advanced Data Privacy and Security Measures.
- Author
-
Ullah, Faizan, Nadeem, Muhammad, Abrar, Mohammad, Amin, Farhan, Salam, Abdu, and Khan, Salabat
- Subjects
- *
CONVOLUTIONAL neural networks , *BRAIN tumors , *DATA security , *SECURITY systems , *COMPUTER-assisted image analysis (Medicine) , *IMAGE segmentation , *DATA privacy - Abstract
Brain tumor segmentation in medical imaging is a critical task for diagnosis and treatment while preserving patient data privacy and security. Traditional centralized approaches often encounter obstacles in data sharing due to privacy regulations and security concerns, hindering the development of advanced AI-based medical imaging applications. To overcome these challenges, this study proposes the utilization of federated learning. The proposed framework enables collaborative learning by training the segmentation model on distributed data from multiple medical institutions without sharing raw data. Leveraging the U-Net-based model architecture, renowned for its exceptional performance in semantic segmentation tasks, this study emphasizes the scalability of the proposed approach for large-scale deployment in medical imaging applications. The experimental results showcase the remarkable effectiveness of federated learning, significantly improving specificity to 0.96 and the dice coefficient to 0.89 with the increase in clients from 50 to 100. Furthermore, the proposed approach outperforms existing convolutional neural network (CNN)- and recurrent neural network (RNN)-based methods, achieving higher accuracy, enhanced performance, and increased efficiency. The findings of this research contribute to advancing the field of medical image segmentation while upholding data privacy and security. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
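The collaborative training described above relies on aggregating model parameters from clients without sharing raw data. A minimal sketch of the classic FedAvg-style size-weighted aggregation is below; the abstract does not state the paper's exact aggregation rule, so this illustrates only the baseline idea:

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Size-weighted average of per-client parameter lists (FedAvg-style)."""
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    n_layers = len(client_params[0])
    return [sum(c * p[i] for c, p in zip(coeffs, client_params))
            for i in range(n_layers)]
```

Each institution trains locally, sends only its updated parameters, and the server mixes them in proportion to local dataset sizes; no images ever leave a site.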
132. Brain Tumor Segmentation for Multi-Modal MRI with Missing Information.
- Author
-
Feng, Xue, Ghimire, Kanchan, Kim, Daniel D., Chandra, Rajat S., Zhang, Helen, Peng, Jian, Han, Binghong, Huang, Gaofeng, Chen, Quan, Patel, Sohil, Bettagowda, Chetan, Sair, Haris I., Jones, Craig, Jiao, Zhicheng, Yang, Li, and Bai, Harrison
- Subjects
BRAIN tumor diagnosis ,MAGNETIC resonance imaging ,TUMOR classification ,BRAIN tumors - Abstract
Deep convolutional neural networks (DCNNs) have shown promise in brain tumor segmentation from multi-modal MRI sequences, accommodating heterogeneity in tumor shape and appearance. The fusion of multiple MRI sequences allows networks to explore complementary tumor information for segmentation. However, developing a network that maintains clinical relevance in situations where certain MRI sequence(s) might be unavailable or unusual poses a significant challenge. While one solution is to train multiple models with different MRI sequence combinations, it is impractical to train a model for every possible sequence combination. In this paper, we propose a DCNN-based brain tumor segmentation framework incorporating a novel sequence dropout technique in which networks are trained to be robust to missing MRI sequences while employing all other available sequences. Experiments were performed on the RSNA-ASNR-MICCAI BraTS 2021 Challenge dataset. When all MRI sequences were available, there were no significant differences in performance of the model with and without dropout for enhancing tumor (ET), tumor core (TC), and whole tumor (WT) (p-values 1.000, 1.000, 0.799, respectively), demonstrating that the addition of dropout improves robustness without hindering overall performance. When key sequences were unavailable, the network with sequence dropout performed significantly better. For example, when tested on only T1, T2, and FLAIR sequences together, DSC for ET, TC, and WT increased from 0.143 to 0.486, 0.431 to 0.680, and 0.854 to 0.901, respectively. Sequence dropout represents a relatively simple yet effective approach for brain tumor segmentation with missing MRI sequences. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
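The sequence dropout idea above amounts to zeroing out whole modality channels at random during training. A minimal NumPy sketch is below; the drop rate, tensor layout, and the keep-at-least-one guard are illustrative assumptions, not details from the paper:

```python
import numpy as np

def sequence_dropout(x, p=0.25, rng=None):
    """Zero out whole MRI sequences (channels) at random during training.

    x: (n_sequences, H, W, D) stacked modalities; p: per-sequence drop rate.
    At least one sequence is always kept.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(x.shape[0]) >= p
    if not keep.any():
        keep[rng.integers(x.shape[0])] = True   # never drop everything
    return x * keep[:, None, None, None]
```

Because the network repeatedly sees inputs with missing sequences during training, it learns representations that degrade gracefully when a sequence is absent at test time.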
133. Clinical Knowledge-Based Hybrid Swin Transformer for Brain Tumor Segmentation.
- Author
-
Xiaoliang Lei, Xiaosheng Yu, Hao Wu, Chengdong Wu, and Jingsi Zhang
- Subjects
TRANSFORMER models ,CONVOLUTIONAL neural networks ,BRAIN tumors ,MAGNETIC resonance imaging ,THREE-dimensional imaging - Abstract
Accurate tumor segmentation from brain tissues in Magnetic Resonance Imaging (MRI) is crucial in pre-surgical planning for brain tumor malignancies. MRI images' heterogeneous intensity and fuzzy boundaries make brain tumor segmentation challenging. Furthermore, recent studies have yet to fully employ the considerable and complementary information in MRI sequences, which offers critical a priori knowledge. This paper proposes a clinical knowledge-based hybrid Swin Transformer multimodal brain tumor segmentation algorithm based on how experts identify malignancies from MRI images. During the encoder phase, a dual-backbone network is constructed, with a Swin Transformer backbone to capture long-range dependencies from 3D MR images and a Convolutional Neural Network (CNN)-based backbone to represent local features. Instead of directly connecting all the MRI sequences, the proposed method re-organizes them into two groups based on MRI principles and characteristics: T1 and T1ce, T2 and FLAIR. These aggregated images are received by the dual-stem Swin Transformer-based encoder branch, and the multimodal sequence-interacted cross-attention module (MScAM) captures the interactive information between the two sets of linked modalities at each stage. In the CNN-based encoder branch, a triple down-sampling module (TDsM) is proposed to balance performance while downsampling. In the final stage of the encoder, the feature maps acquired from the two branches are concatenated as input to the decoder, which is constrained by the MScAM outputs. The proposed method has been evaluated on datasets from the MICCAI BraTS2021 Challenge. The experimental results demonstrate that the proposed algorithm can precisely segment brain tumors, especially the portions within tumors. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
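The cross-attention between the two modality groups (T1/T1ce queries attending to T2/FLAIR keys and values, or vice versa) can be sketched as a single-head attention over token features. The MScAM internals are not given in the abstract, so this is only the generic mechanism it builds on; shapes and projection sizes are assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, Wq, Wk, Wv):
    """Tokens from one modality group attend to tokens from the other."""
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # (n_q, n_kv) attention map
    return A @ V
```

Each output token is a mixture of the other group's value vectors, weighted by learned query-key similarity, which is how information flows between the two modality streams.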
134. 3D Kronecker Convolutional Feature Pyramid for Brain Tumor Semantic Segmentation in MR Imaging.
- Author
-
Nazir, Kainat, Madni, Tahir Mustafa, Janjua, Uzair Iqba, Javed, Umer, Khan, Muhammad Attique, Tariq, Usman, and Jae-Hyuk Cha
- Subjects
BRAIN tumors ,MAGNETIC resonance imaging ,CANCER diagnosis ,PYRAMIDS ,FEATURE selection ,IMAGE segmentation ,MULTIMODAL user interfaces - Abstract
A brain tumor significantly impacts quality of life and changes everything for a patient and their loved ones. Diagnosing a brain tumor usually begins with magnetic resonance imaging (MRI). Manual brain tumor diagnosis from MRI images always requires an expert radiologist; however, this process is time-consuming and costly. Therefore, a computerized technique is required for brain tumor detection in MRI images. Using MRI, a novel three-dimensional (3D) Kronecker convolution feature pyramid (KCFP) mechanism is used to segment brain tumors, resolving pixel loss and the weak processing of multi-scale lesions. A single dilation rate was replaced with the 3D Kronecker convolution, while local feature learning was performed using 3D Feature Selection (3DFSC). A 3D KCFP was added at the end of 3DFSC to resolve the weak processing of multi-scale lesions, yielding efficient segmentation of brain tumors of different sizes. A 3D connected component analysis with a global threshold was used as a post-processing technique. The standard Multimodal Brain Tumor Segmentation 2020 dataset was used for model validation. Our 3D KCFP model performed exceptionally well compared with other benchmark schemes, with Dice similarity coefficients of 0.90, 0.80, and 0.84 for the whole tumor, enhancing tumor, and tumor core, respectively. Overall, the proposed model was efficient in brain tumor segmentation, which may help medical practitioners make an appropriate diagnosis for future treatment planning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
135. Brain tumor segmentation by auxiliary classifier generative adversarial network.
- Author
-
Kiani Kalejahi, Behnam, Meshgini, Saeed, and Danishvar, Sebelan
- Abstract
Recently, great progress has been made in building automatic segmentation and classification systems for medical applications using machine learning techniques, and such systems have been used to analyze medical images. However, the performance of some of these systems typically degrades on fresh data. This may be because different data were used for training, owing to changes in protocols or imaging equipment, or a combination of such factors. In medical imaging, one of the most difficult and important goals is to produce an image that is genuinely medical yet wholly distinct from the original images. The synthetic images produced in this way boost diagnostic accuracy and make more data available, both for computer-aided analysis and for the training of medical professionals. These issues are mostly caused by low-contrast MR images, particularly in the anatomical regions of the brain, as well as by shifts across sequences. Within the scope of this study, we investigate the possibility of producing multi-sequence MR images with auxiliary classifier generative adversarial networks (AC-GANs). In addition, a new deep learning approach to tumor segmentation in MR images is provided. First, a deep neural network is trained as the discriminator of a GAN on datasets of magnetic resonance (MR) images in order to extract features and learn the structure of MR images in its layers. The fully connected layers are then removed, and the entire deep network is trained for segmentation for the purpose of diagnosing tumors. The proposed AC-GAN method achieves an overall accuracy of 94% on the BraTS2019 database using Adam optimization with a batch size of 30. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
136. Tumor delineation from 3-D MR brain images.
- Author
-
Roy, Shaswati and Maji, Pradipta
- Abstract
In cancer research, automatic brain tumor detection from 3-D magnetic resonance (MR) images is an important prerequisite. In this regard, the paper presents a new method for segmenting brain tumors from MR volumes corrupted by different imaging artifacts, such as bias field and noise. It addresses the uncertainty and bias-field artifacts of brain MR images by means of a segmentation algorithm termed CoLoRS (Coherent Local Intensity Rough Segmentation). However, the lack of knowledge about tumor intensity and the textural properties around the tumor surface can cause tumor tissue to be excluded from, or healthy tissue to be included in, the tumor region obtained by CoLoRS. Therefore, a post-processing technique is introduced for precise delineation of the tumor region from healthy brain tissue. It combines the benefits of morphological operations and rough set theory within a region-growing approach to improve tumor detection. Several publicly available MR brain tumor data sets, namely BRATS 12, BRATS 14, and BRATS 19, are used to demonstrate the effectiveness of the proposed method against existing approaches. For the real high-grade and low-grade data sets of BRATS 12-14, the Dice coefficient of the proposed algorithm is 0.783443 and 0.787045, respectively, and for BRATS 19, 0.788849 and 0.808582, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
137. A new ensemble method for brain tumor segmentation
- Author
-
Laouali, Souleymane Mahaman, Chebbah, Mouna, and Nakouri, Haïfa
- Published
- 2024
- Full Text
- View/download PDF
138. Multimodal MRI brain tumor segmentation using 3D attention UNet with dense encoder blocks and residual decoder blocks
- Author
-
Tassew, Tewodros, Ashamo, Betelihem Asfaw, and Nie, Xuan
- Published
- 2024
- Full Text
- View/download PDF
139. MRI Brain tumor segmentation and classification with improved U-Net model
- Author
-
Kusuma, Palleti Venkata and Reddy, S. Chandra Mohan
- Published
- 2024
- Full Text
- View/download PDF
140. Applications of Deep Neural Networks with Fractal Structure and Attention Blocks for 2D and 3D Brain Tumor Segmentation
- Author
-
Cheng, Kaiming, Shen, Yueyang, and Dinov, Ivo D.
- Published
- 2024
- Full Text
- View/download PDF
141. An improved U-shaped network for brain tumor segmentation
- Author
-
Jing-teng HUANG, Qiang LI, and Xin GUAN
- Subjects
brain tumor segmentation ,3d u-net ,skip connection ,inverted residuals ,magnetic resonance image ,multi-modal fusion ,Mining engineering. Metallurgy ,TN1-997 ,Environmental engineering ,TA170-171 - Abstract
Accurate segmentation of brain tumors from magnetic resonance images is key to the clinical diagnosis and rational treatment of brain tumor diseases. Recently, convolutional neural networks have been widely used in biomedical image processing. 3D U-Net is popular because of its excellent segmentation results; however, the feature map supplied by its skip connections is the encoder's output feature map after feature extraction, and the loss of original detail in this process is ignored. In the 3D U-Net design, after each layer's convolution, regularization, and activation, the detail contained in the feature map drifts away from the original detail. The essence of a skip connection is to supply the decoder with the detail of the original features; that is, in the decoder stage, the more original the feature maps supplied by the skip connections, the more easily the decoder can achieve a good segmentation. To address this problem, this paper proposes the concept of a front-skip connection: the starting point of the skip connection is moved forward to improve network performance. On this basis, we design a front-skip-connection inverted residual U-shaped network (FS Inv-Res U-Net). First, the front-skip connection is applied to three typical networks, DMF Net, HDC Net, and 3D U-Net, to verify its effectiveness and generality; it improves the performance of all three networks, indicating that the idea is simple but powerful and works out of the box. Second, 3D U-Net is enhanced with the front-skip connection and the inverted residual structure of MobileNet, and FS Inv-Res U-Net is proposed based on these two ideas. Additionally, ablation experiments are conducted on FS Inv-Res U-Net: adding the front-skip connection and the inverted residual module to the 3D U-Net backbone greatly improves segmentation performance, indicating that both components benefit our brain tumor segmentation network. Finally, the proposed network is validated on the validation sets of the public BraTS 2018 and BraTS 2019 datasets. The Dice scores on the enhancing tumor, whole tumor, and tumor core are 80.23%, 90.30%, and 85.45% and 78.38%, 89.78%, and 83.01%, respectively; the Hausdorff95 distances are 2.35, 4.77, and 5.50 mm and 4, 5.57, and 6.37 mm, respectively. These results show that the FS Inv-Res U-Net proposed in this paper matches the evaluation indicators of advanced networks and provides accurate brain tumor segmentations.
- Published
- 2023
- Full Text
- View/download PDF
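The front-skip connection idea above is simply a change in what an encoder stage forwards to the decoder: the stage's input rather than its processed output. A toy NumPy sketch of the contrast is below, with a dense map standing in for the stage's conv + norm + activation:

```python
import numpy as np

def conv_block(x, w):
    # stand-in for conv + norm + activation
    return np.maximum(x @ w, 0.0)

def stage_standard(x, w):
    out = conv_block(x, w)
    return out, out        # conventional skip: the processed output

def stage_front_skip(x, w):
    out = conv_block(x, w)
    return out, x          # front-skip: forward the stage INPUT instead
```

The decoder receiving the front skip sees features untouched by this stage's processing, which is exactly the "more original detail" argument made in the abstract.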
142. GETNet: Group Normalization Shuffle and Enhanced Channel Self-Attention Network Based on VT-UNet for Brain Tumor Segmentation
- Author
-
Bin Guo, Ning Cao, Ruihao Zhang, and Peng Yang
- Subjects
brain tumor segmentation ,MRI ,medical image ,deep learning ,Transformer ,Medicine (General) ,R5-920 - Abstract
Currently, brain tumors are extremely harmful and prevalent. Deep learning technologies, including CNNs, UNet, and Transformer, have been applied in brain tumor segmentation for many years and have achieved some success. However, traditional CNNs and UNet capture insufficient global information, and Transformer cannot provide sufficient local information. Fusing the global information from Transformer with the local information of convolutions is an important step toward improving brain tumor segmentation. We propose the Group Normalization Shuffle and Enhanced Channel Self-Attention Network (GETNet), a network combining the pure Transformer structure with convolution operations based on VT-UNet, which considers both global and local information. The network includes the proposed group normalization shuffle block (GNS) and enhanced channel self-attention block (ECSA). The GNS is used after the VT Encoder Block and before the downsampling block to improve information extraction. An ECSA module is added to the bottleneck layer to utilize the characteristics of the detailed features in the bottom layer effectively. We also conducted experiments on the BraTS2021 dataset to demonstrate the performance of our network. The Dice coefficient (Dice) score results show that the values for the regions of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) were 91.77, 86.03, and 83.64, respectively. The results show that the proposed model achieves state-of-the-art performance compared with more than eleven benchmarks.
- Published
- 2024
- Full Text
- View/download PDF
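The "shuffle" in the GNS block presumably mixes information across channel groups after grouped processing; the block's exact design is not given in the abstract, so the sketch below shows only the classic channel-shuffle operation that such blocks are typically built on:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups (ShuffleNet-style).

    x: (C, H, W) feature map with C divisible by `groups`.
    """
    c, h, w = x.shape
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))
```

After the shuffle, each contiguous run of channels contains one channel from every original group, so a subsequent grouped convolution sees information from all groups.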
143. Estimation of Fractal Dimension and Segmentation of Brain Tumor with Parallel Features Aggregation Network
- Author
-
Haseeb Sultan, Nadeem Ullah, Jin Seong Hong, Seung Gu Kim, Dong Chan Lee, Seung Yong Jung, and Kang Ryoung Park
- Subjects
brain tumor segmentation ,feature aggregation ,fractal dimension ,enhancing tumor ,tumor core ,whole tumor ,Thermodynamics ,QC310.15-319 ,Mathematics ,QA1-939 ,Analysis ,QA299.6-433 - Abstract
The accurate recognition of a brain tumor (BT) is crucial for accurate diagnosis, intervention planning, and the evaluation of post-intervention outcomes. Conventional methods of manually identifying and delineating BTs are inefficient, prone to error, and time-consuming. Subjective methods for BT recognition are biased because of the diffuse and irregular nature of BTs, along with varying enhancement patterns and the coexistence of different tumor components. Hence, the development of an automated diagnostic system for BTs is vital for mitigating subjective bias and achieving speedy and effective BT segmentation. Recently developed deep learning (DL)-based methods have replaced subjective methods; however, these DL-based methods still show comparatively low performance, leaving room for improvement, and remain limited in heterogeneous dataset analysis. Herein, we propose a DL-based parallel features aggregation network (PFA-Net) for the robust segmentation of three different regions in a BT scan, and we perform a heterogeneous dataset analysis to validate its generality. The parallel features aggregation (PFA) module exploits the local radiomic contextual spatial features of BTs at low, intermediate, and high levels for different types of tumors and aggregates them in a parallel fashion. To enhance the diagnostic capabilities of the proposed segmentation framework, we introduced fractal dimension estimation into our system, seamlessly combined as an end-to-end task to gain insights into the complexity and irregularity of structures, thereby characterizing the intricate morphology of BTs. The proposed PFA-Net achieves Dice scores (DSs) of 87.54%, 93.42%, and 91.02% for the enhancing tumor region, whole tumor region, and tumor core region, respectively, on the multimodal brain tumor segmentation (BraTS)-2020 open database, surpassing the performance of existing state-of-the-art methods. 
Additionally, PFA-Net is validated with another open database of brain tumor progression and achieves a DS of 64.58% for heterogeneous dataset analysis, surpassing the performance of existing state-of-the-art methods.
- Published
- 2024
- Full Text
- View/download PDF
144. SARFNet: Selective Layer and Axial Receptive Field Network for Multimodal Brain Tumor Segmentation
- Author
-
Bin Guo, Ning Cao, Peng Yang, and Ruihao Zhang
- Subjects
brain tumor segmentation ,MRI ,medical image ,deep learning ,UNet ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Efficient magnetic resonance imaging (MRI) segmentation, which is helpful for treatment planning, is essential for identifying brain tumors from detailed images. In recent years, various convolutional neural network (CNN) structures have been introduced for brain tumor segmentation tasks and have performed well. However, the downsampling blocks of most existing methods are typically used only for processing the variation in image sizes and lack sufficient capacity for further extraction features. We, therefore, propose SARFNet, a method based on UNet architecture, which consists of the proposed SLiRF module and advanced AAM module. The SLiRF downsampling module can extract feature information and prevent the loss of important information while reducing the image size. The AAM block, incorporated into the bottleneck layer, captures more contextual information. The Channel Attention Module (CAM) is introduced into skip connections to enhance the connections between channel features to improve accuracy and produce better feature expression. Ultimately, deep supervision is utilized in the decoder layer to avoid vanishing gradients and generate better feature representations. Many experiments were performed to validate the effectiveness of our model on the BraTS2018 dataset. SARFNet achieved Dice coefficient scores of 90.40, 85.54, and 82.15 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively. The results show that the proposed model achieves state-of-the-art performance compared with twelve or more benchmarks.
- Published
- 2024
- Full Text
- View/download PDF
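The Channel Attention Module (CAM) added to the skip connections above follows the general squeeze-and-excitation pattern; SARFNet's exact CAM design is not spelled out in the abstract, so the sketch below shows the standard mechanism, with dense projections and a reduction ratio as assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel gating.

    x: (C, H, W); w1: (C//r, C); w2: (C, C//r) with reduction ratio r.
    """
    s = x.mean(axis=(1, 2))                          # squeeze: global average pool
    gate = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))     # excitation, each value in (0, 1)
    return x * gate[:, None, None]                   # reweight channels
```

Each channel is rescaled by a learned gate computed from the global channel statistics, letting the skip connection emphasize informative feature channels before they reach the decoder.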
145. Deep learning-based magnetic resonance image segmentation technique for application to glioma
- Author
-
Bing Wan, Bingbing Hu, Ming Zhao, Kang Li, and Xu Ye
- Subjects
deep learning ,brain tumor segmentation ,magnetic resonance imaging ,loss function ,medical image ,Medicine (General) ,R5-920 - Abstract
Introduction: Brain glioma segmentation is a critical task for medical diagnosis, monitoring, and treatment planning. Discussion: Although deep learning-based fully convolutional neural networks have shown promising results in this field, their unstable segmentation quality remains a major concern. Moreover, they do not consider the unique genomic and basic data of brain glioma patients, which may lead to inaccurate diagnosis and treatment planning. Methods: This study proposes a new model that overcomes this problem by improving the overall architecture and incorporating an innovative loss function. First, we employed DeepLabv3+ as the overall architecture of the model and RegNet as the image encoder. We designed an attribute encoder module to incorporate the patient's genomic and basic data and the image depth information into a 2D convolutional neural network, which was combined with the image encoder and atrous spatial pyramid pooling module to form the encoder module for addressing the multimodal fusion problem. In addition, the cross-entropy loss and Dice loss are implemented with linear weighting to solve the problem of sample imbalance. An innovative loss function is proposed to suppress regions of specific sizes, thereby preventing segmentation errors in noise-like regions; hence, higher-stability segmentation results are obtained. Experiments were conducted on the Lower-Grade Glioma Segmentation Dataset, a widely used benchmark dataset for brain tumor segmentation. Results: The proposed method achieved a Dice score of 94.36 and an intersection-over-union score of 91.83, thus outperforming other popular models.
- Published
- 2023
- Full Text
- View/download PDF
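The linear weighting of cross-entropy and Dice loss described above can be sketched for the binary case as follows; the weighting coefficient is an assumption, and this omits the paper's additional region-size-suppressing term:

```python
import numpy as np

def dice_loss(p, y, eps=1e-6):
    # soft Dice on predicted probabilities p and binary mask y
    inter = (p * y).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + y.sum() + eps)

def bce_loss(p, y, eps=1e-7):
    p = np.clip(p, eps, 1.0 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def combined_loss(p, y, lam=0.5):
    # linear weighting of cross-entropy and Dice, as described above
    return lam * bce_loss(p, y) + (1.0 - lam) * dice_loss(p, y)
```

The Dice term normalizes by foreground size and so resists the class imbalance between small tumors and large backgrounds, while the cross-entropy term keeps per-pixel gradients well behaved.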
146. A deep learning approach for multi‐stage classification of brain tumor through magnetic resonance images.
- Author
-
Gull, Sahar, Akbar, Shahzad, and Naqi, Syed Muhammad
- Subjects
- *
DEEP learning , *BRAIN tumors , *MAGNETIC resonance imaging , *CONVOLUTIONAL neural networks , *TUMOR classification , *SIGNAL-to-noise ratio - Abstract
Brain tumors are the 10th leading cause of death among humans. The detection of brain tumors is a significant process in the medical field. Therefore, the objective of this research work is to propose a fully automated deep learning framework for multistage classification. Besides, this study focuses on developing an efficient and reliable system using a convolutional neural network (CNN). In this study, the fast bounding box technique is used for segmentation. Moreover, three CNN-based models are developed for multistage classification from magnetic resonance images on three publicly available datasets: the first is obtained from the Kaggle Repository (Dataset-1), the second is known as Figshare (Dataset-2), and the third is called REMBRANDT (Dataset-3), used to classify the MR images into different grades. Different augmentation techniques are applied to increase the data size of the MR images. In pre-processing, the proposed models achieved a higher peak signal-to-noise ratio for noise removal. The first proposed deep CNN framework, Classification-1, obtained 99.40% accuracy, classifying MR images into two classes: (i) normal and (ii) abnormal. The second proposed CNN framework, Classification-2, obtained 97.78% accuracy, classifying brain tumors into three types: meningioma, glioma, and pituitary. Similarly, the third CNN framework, Classification-3, obtained 98.91% accuracy, further classifying tumor MR images into four classes: Grade I, Grade II, Grade III, and Grade IV. The results demonstrate that the proposed models achieved better performance on three large and diverse datasets. Comparison of the obtained outcomes shows that the developed models are more efficient and effective than state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
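The abstract above reports that pre-processing quality was measured via the peak signal-to-noise ratio. As an illustration only (not the authors' exact pipeline), a minimal PSNR computation for 8-bit images might look like this; the function name and `max_value` default are assumptions:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10((max_value ** 2) / mse)
```

A higher PSNR between the denoised image and a clean reference indicates more effective noise removal.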
147. Multi-view brain tumor segmentation (MVBTS): An ensemble of planar and triplanar attention UNets.
- Author
-
RAJPUT, Snehal, KAPDI, Rupal A., RAVAL, Mehul S., and ROY, Mohendra
- Subjects
- *
BRAIN tumors , *RESOURCE-limited settings , *ANATOMICAL planes - Abstract
3D UNet achieves high brain tumor segmentation performance but requires heavy computation, large memory, and abundant training data, and has limited interpretability. As an alternative, the paper explores 2D triplanar (2.5D) processing, which allows images to be examined along the axial, sagittal, and coronal planes individually or together: each individual plane captures spatial relationships, while the combined planes capture contextual (depth) information. The paper proposes and analyzes an ensemble of uniplanar and triplanar UNets combined with channel and spatial attention for brain tumor segmentation, investigating the significance of each plane and the impact of uniplanar and triplanar ensembles with attention on segmentation. We tested the performance of these variants on the BraTS2020 training and validation datasets. The best Dice similarity coefficients for enhancing tumor, whole tumor, and tumor core are 0.712, 0.897, and 0.837 over the training set and 0.699, 0.875, and 0.782 over the validation set, respectively (obtained through the BraTS model evaluation platform). These scores are on par with the leading 2D and 3D BraTS models. Therefore, the proposed approach, with almost 3× fewer parameters, demonstrates performance comparable to that of a 3D model, making it suitable for brain tumor segmentation in resource-limited settings. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
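The triplanar (2.5D) idea described above amounts to viewing one 3D volume through its three orthogonal planes. A minimal sketch of extracting those slices with NumPy follows; the `(z, y, x)` axis ordering is an assumption, since the anatomical axis convention depends on how the volume was acquired and stored:

```python
import numpy as np

def triplanar_slices(volume: np.ndarray, index):
    """Return the axial, coronal, and sagittal 2D slices of a 3D volume
    passing through the voxel at `index` (assumed (z, y, x) ordering)."""
    z, y, x = index
    axial = volume[z, :, :]      # fixed z: horizontal cross-section
    coronal = volume[:, y, :]    # fixed y: front-to-back cross-section
    sagittal = volume[:, :, x]   # fixed x: left-to-right cross-section
    return axial, coronal, sagittal
```

A uniplanar model would consume one of these slice stacks; a triplanar ensemble combines predictions from all three.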
148. Multi-modal Brain Tumor MR Image Segmentation with Fused Attention Mechanism.
- Author
-
毋小省, 杨奇鸿, 唐朝生, and 孙君顶
- Abstract
Copyright of Journal of Computer-Aided Design & Computer Graphics / Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao is the property of Gai Kan Bian Wei Hui and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
149. Bridged-U-Net-ASPP-EVO and Deep Learning Optimization for Brain Tumor Segmentation.
- Author
-
Yousef, Rammah, Khan, Shakir, Gupta, Gaurav, Albahlal, Bader M., Alajlan, Saad Abdullah, and Ali, Aleem
- Subjects
- *
BRAIN tumors , *DEEP learning , *MAGNETIC resonance imaging - Abstract
Brain tumor segmentation from magnetic resonance images (MRI) is considered a big challenge due to the complexity of brain tumor tissues, and segmenting these tissues from healthy tissue is an even more tedious task when manual segmentation is undertaken by radiologists. In this paper, we present an experimental approach that emphasizes the impact and effectiveness of deep learning elements, such as optimizers and loss functions, on an optimal deep learning solution for brain tumor segmentation. We evaluated our performance on the most popular brain tumor datasets (MICCAI BraTS 2020 and RSNA-ASNR-MICCAI BraTS 2021). Furthermore, a new Bridged U-Net-ASPP-EVO is introduced that exploits Atrous Spatial Pyramid Pooling to enhance the capture of multi-scale information, helping segment tumors of different sizes, along with Evolving Normalization layers, squeeze-and-excitation residual blocks, and max-average pooling for downsampling. Two variants of this architecture were constructed (Bridged U-Net_ASPP_EVO v1 and Bridged U-Net_ASPP_EVO v2). These two models achieved the best results when compared with other state-of-the-art models: average segmentation Dice scores of 0.84, 0.85, and 0.91 from v1, and 0.83, 0.86, and 0.92 from v2, for the Enhanced Tumor (ET), Tumor Core (TC), and Whole Tumor (WT) sub-regions, respectively, on the BraTS 2021 validation dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
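The Dice scores reported for the ET, TC, and WT sub-regions above follow the standard Dice similarity coefficient. A minimal sketch of that metric, assuming one binary mask per sub-region (the function name and the `eps` smoothing term are illustrative choices, not from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)
```

In a BraTS-style evaluation, this would be computed separately on the ET, TC, and WT masks and then averaged across cases.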
150. Multi-Modal Brain Tumor Data Completion Based on Reconstruction Consistency Loss.
- Author
-
Jiang, Yang, Zhang, Shuang, and Chi, Jianning
- Subjects
DIGITAL image processing ,DEEP learning ,MAGNETIC resonance imaging ,BRAIN tumors ,DIAGNOSTIC imaging ,ARTIFICIAL neural networks ,ALGORITHMS - Abstract
Multi-modal brain magnetic resonance imaging (MRI) data has been widely applied in vision-based brain tumor segmentation methods due to the complementary diagnostic information from its different modalities. Since multi-modal image data is likely to be corrupted by noise or artifacts during the practical scanning process, making it difficult to build a universal model for subsequent segmentation and diagnosis from incomplete input data, image completion has become one of the most attractive fields in medical image pre-processing. It can not only assist clinicians in observing the patient's lesion area more intuitively and comprehensively, but also save costs for patients and reduce their psychological pressure during tedious pathological examinations. Recently, many deep learning-based methods have been proposed to complete multi-modal image data with good performance. However, current methods cannot fully reflect the continuous semantic information between adjacent slices or the structural information of intra-slice features, resulting in limited completion effectiveness and efficiency. To solve these problems, we propose a novel generative adversarial network (GAN) framework, named random generative adversarial network (RAGAN), to complete missing T1, T1ce, and FLAIR data from the given T2 modality in real brain MRI. It consists of the following parts: (1) For the generator, we use T2 modal images and multi-modal classification labels from the same sample for cyclically supervised training of image generation, so as to realize the restoration of an arbitrary modal image. (2) For the discriminator, a multi-branch network is proposed in which the primary branch judges whether a given generated modal image is similar to the target modal image, while the auxiliary branch judges whether its essential visual features are similar to those of the target modal image.
We conduct qualitative and quantitative validations on the BraTS 2018 dataset, generating 10,686 MRI images for each missing modality. Real brain tumor morphology images were compared with the synthesized images using PSNR and SSIM as evaluation metrics. The experiments demonstrate that the brightness, resolution, location, and morphology of brain tissue under the different modalities are well reconstructed. We also use a segmentation network as a further validation experiment, feeding a blend of synthetic and real images into the classic UNet segmentation network, which yields a segmentation result of 77.58%. To further demonstrate the value of the proposed method, we use the stronger RES_UNet with deep supervision as the segmentation model, reaching an accuracy of 88.76%. Although our method does not significantly outperform other algorithms overall, its Dice value is 2% higher than that of the current state-of-the-art data completion algorithm TC-MGAN. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
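The evaluation above compares real and synthesized images with PSNR and SSIM. As a rough illustration of the SSIM side (not the authors' implementation), here is a simplified single-window SSIM computed over whole images; the full metric averages this quantity over Gaussian-weighted local windows, and the 8-bit `data_range` default is an assumption:

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Single-window structural similarity over whole images (the standard
    SSIM averages this over local sliding windows instead)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizers from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An SSIM near 1 indicates that the synthesized modality preserves the luminance, contrast, and structure of the real image.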