1,576 results for "Brain tumor segmentation"
Search Results
2. DCRUNet++: A Depthwise Convolutional Residual UNet++ Model for Brain Tumor Segmentation
- Author
-
Sonawane, Yash, Kolekar, Maheshkumar H., Yadav, Agnesh Chandra, Kadam, Gargi, Tiwarekar, Sanika, Kalbande, Dhananjay R., Antonacopoulos, Apostolos, editor, Chaudhuri, Subhasis, editor, Chellappa, Rama, editor, Liu, Cheng-Lin, editor, Bhattacharya, Saumik, editor, and Pal, Umapada, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Segmentation of Brain Tumor Parts from Multi-spectral MRI Records Using Deep Learning and U-Net Architecture
- Author
-
Csaholczi, Szabolcs, Györfi, Ágnes, Kovács, Levente, Szilágyi, László, Hernández-García, Ruber, editor, Barrientos, Ricardo J., editor, and Velastin, Sergio A., editor
- Published
- 2025
- Full Text
- View/download PDF
4. SCC-CAM: Weakly Supervised Segmentation on Brain Tumor MRI with Similarity Constraint and Causality
- Author
-
Jiao, Panpan, Tian, Zhiqiang, Chen, Zhang, Guo, Xuejian, Chen, Zhi, Dou, Liang, Du, Shaoyi, Lin, Zhouchen, editor, Cheng, Ming-Ming, editor, He, Ran, editor, Ubul, Kurban, editor, Silamu, Wushouer, editor, Zha, Hongbin, editor, Zhou, Jie, editor, and Liu, Cheng-Lin, editor
- Published
- 2025
- Full Text
- View/download PDF
5. Residual learning for brain tumor segmentation: dual residual blocks approach.
- Author
-
Verma, Akash and Yadav, Arun Kumar
- Subjects
-
CONVOLUTIONAL neural networks, BRAIN tumors, DEEP learning, MAGNETIC resonance imaging, GLIOMAS
- Abstract
The most common type of malignant brain tumor, glioma, has a variety of grades that significantly impact a patient's chance of survival. Accurate segmentation of brain tumor regions from MRI images is crucial for enhancing diagnostic precision and refining surgical strategies. This task is particularly challenging due to the diverse sizes and shapes of tumors, as well as the intricate nature of MRI data. Mastering this segmentation process is essential for improving clinical outcomes and ensuring optimal treatment planning. In this research, we provide a UNet-based model (RR-UNet) designed specifically for brain tumor segmentation, which uses small and diverse datasets containing human-annotated ground-truth segmentations. This model uses residual learning to improve segmentation results over the original UNet architecture, as shown by higher Dice similarity coefficient (DSC) and Intersection over Union (IoU) scores. Residual blocks enable a deeper network that can capture complex patterns, and they reuse features, allowing the network to learn more abstract and informative representations from input images. Through comprehensive evaluation and validation, we illustrate our method's efficacy and generalization capabilities, emphasizing its potential for real-world clinical applications. The segmentation model achieves a DSC of 98.18% and an accuracy of 99.78% on the Figshare LGG (low-grade glioma) FLAIR segmentation dataset, and a DSC of 98.54% and an accuracy of 99.81% on the BraTS 2020 dataset. An ablation study shows the importance of the model's residual mechanism. Overall, the proposed approach outperforms or is comparable to the most recent algorithms in brain tumor segmentation tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
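The DSC and IoU figures quoted in the entry above are standard overlap metrics between a predicted and a ground-truth mask. As a minimal illustrative sketch (not the authors' code; the toy masks are hypothetical), they can be computed on binary arrays as:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """Intersection over Union between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy 2x3 masks: 4 pixels predicted, 4 in ground truth, 3 overlapping.
pred = np.array([[1, 1, 0], [1, 1, 0]], dtype=bool)
gt   = np.array([[1, 1, 0], [1, 0, 1]], dtype=bool)
```

With these masks, Dice = 2·3/(4+4) = 0.75 while IoU = 3/5 = 0.6, illustrating why Dice values run higher than IoU on the same prediction.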
6. Implementation of innovative approach for detecting brain tumors in magnetic resonance imaging using NeuroFusionNet model.
- Author
-
Kotte, Arpitha and Ahmad, Syed Shabbeer
- Subjects
BRAIN tumors, MAGNETIC resonance imaging, IMAGE processing, TUMOR classification, DIAGNOSTIC imaging, DEEP learning
- Abstract
The goal of this study is to create a robust system that can quickly detect and precisely classify brain tumors, which is essential for improving treatment results. The study uses advanced image processing techniques and the NeuroFusionNet deep learning model to accurately segment tumors in the brain tumor segmentation (BraTS) dataset, presenting a detailed methodology. The objective is a high-precision system that surpasses current methods on key performance metrics. NeuroFusionNet demonstrates an outstanding accuracy of 99.21%, as well as impressive specificity and sensitivity rates of 99.17% and 99.383%, respectively, exceeding previous benchmarks. The findings emphasize the system's ability to greatly enhance the diagnostic process, enabling early intervention and ultimately improving patient care in brain tumor detection and classification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
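The accuracy, specificity, and sensitivity rates reported in this entry all derive from confusion-matrix counts. A minimal sketch of those definitions (the toy label lists are hypothetical, not the authors' data):

```python
def confusion_counts(pred, target):
    """True/false positive/negative counts for binary label lists."""
    tp = sum(p and t for p, t in zip(pred, target))
    tn = sum((not p) and (not t) for p, t in zip(pred, target))
    fp = sum(p and (not t) for p, t in zip(pred, target))
    fn = sum((not p) and t for p, t in zip(pred, target))
    return tp, tn, fp, fn

def metrics(pred, target):
    tp, tn, fp, fn = confusion_counts(pred, target)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on the tumor class
    specificity = tn / (tn + fp)   # recall on the healthy class
    return accuracy, sensitivity, specificity

# Hypothetical predictions over 10 cases (1 = tumor, 0 = healthy).
pred   = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
target = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
```

Here accuracy is 8/10, sensitivity 3/4, and specificity 5/6, showing how the three numbers can diverge on imbalanced data.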
7. Segmentation of glioblastomas via 3D FusionNet.
- Author
-
Guo, Xiangyu, Zhang, Botao, Peng, Yue, Chen, Feng, and Li, Wenbin
- Subjects
BRAIN tumors, DEEP learning, DATA augmentation, MAGNETIC resonance imaging, SAMPLE size (Statistics)
- Abstract
Introduction: This study presented an end-to-end 3D deep learning model for the automatic segmentation of brain tumors. Methods: The MRI data used in this study were obtained from a cohort of 630 GBM patients from the University of Pennsylvania Health System (UPENN-GBM). Data augmentation techniques such as flips and rotations were employed to further increase the sample size of the training set. The segmentation performance of the models was evaluated by recall, precision, Dice score, Lesion False Positive Rate (LFPR), Average Volume Difference (AVD), and Average Symmetric Surface Distance (ASSD). Results: When applying the FLAIR, T1, ceT1, and T2 MRI modalities, FusionNet-A and FusionNet-C were the best-performing models overall, with FusionNet-A particularly excelling in the enhancing tumor areas, while FusionNet-C demonstrated strong performance in the necrotic core and peritumoral edema regions. FusionNet-A excels in the enhancing tumor areas across all metrics (0.75 for recall, 0.83 for precision, and 0.74 for Dice score) and also performs well in the peritumoral edema regions (0.77 for recall, 0.77 for precision, and 0.75 for Dice score). Combinations including FLAIR and ceT1 tend to have better segmentation performance, especially for necrotic core regions. Using only FLAIR achieves a recall of 0.73 for peritumoral edema regions. Visualization results also indicate that our model generally achieves segmentation results similar to the ground truth. Discussion: FusionNet combines the benefits of U-Net and SegNet, outperforming the tumor segmentation performance of both. Although our model effectively segments brain tumors with competitive accuracy, we plan to extend the framework to achieve even better segmentation performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. DTASUnet: a local and global dual transformer with the attention supervision U-network for brain tumor segmentation.
- Author
-
Ma, Bo, Sun, Qian, Ma, Ze, Li, Baosheng, Cao, Qiang, Wang, Yungang, and Yu, Gang
- Subjects
-
BRAIN tumors, MAGNETIC resonance imaging, TRANSFORMER models, DEEP learning, BRAIN surgery
- Abstract
Glioma refers to a highly prevalent type of brain tumor that is strongly associated with a high mortality rate. During the treatment process of the disease, it is particularly important to accurately perform segmentation of the glioma from Magnetic Resonance Imaging (MRI). However, existing methods used for glioma segmentation usually rely solely on either local or global features and perform poorly in terms of capturing and exploiting critical information from tumor volume features. Herein, we propose a local and global dual transformer with an attentional supervision U-shape network called DTASUnet, which is purposed for glioma segmentation. First, we built a pyramid hierarchical encoder based on 3D shift local and global transformers to effectively extract the features and relationships of different tumor regions. We also designed a 3D channel and spatial attention supervision module to guide the network, allowing it to capture key information in volumetric features more accurately during the training process. In the BraTS 2018 validation set, the average Dice scores of DTASUnet for the tumor core (TC), whole tumor (WT), and enhancing tumor (ET) regions were 0.845, 0.905, and 0.808, respectively. These results demonstrate that DTASUnet has utility in assisting clinicians with determining the location of gliomas to facilitate more efficient and accurate brain surgery and diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Multimodal Connectivity‐Guided Glioma Segmentation From Magnetic Resonance Images via Cascaded 3D Residual U‐Net.
- Author
-
Sun, Xiaoyan, Hu, Chuhan, He, Wenhan, Yuan, Zhenming, and Zhang, Jian
- Subjects
-
ARTIFICIAL neural networks, MAGNETIC resonance imaging, BRAIN tumors, GLIOMAS, DEATH rate
- Abstract
Glioma is a type of brain tumor with a high mortality rate. Magnetic resonance imaging (MRI) is commonly used for examination, and the accurate segmentation of tumor regions from MR images is essential to computer-aided diagnosis. However, due to the intrinsic heterogeneity of brain glioma, precise segmentation is very challenging, especially for tumor subregions. This article proposes a two-stage cascaded method for brain tumor segmentation that considers the hierarchical structure of the target tumor subregions. The first stage identifies the whole tumor (WT) against the background area, and the second stage achieves fine-grained segmentation of the subregions, including the enhanced tumor (ET) region and tumor core (TC) region. Both stages apply a deep neural network combining a modified 3D U-Net with a residual connection scheme to tumor region and subregion segmentation. Moreover, in the training phase, the generation of 3D masks for subregions with potentially incomplete connectivity is guided by the completely connected regions. Experiments evaluated the performance of the method on both area and boundary accuracy. The average Dice scores of the WT, TC, and ET regions on the BraTS 2020 dataset are 0.9168, 0.8992, and 0.8489, and the Hausdorff distances are 6.021, 9.203, and 12.171, respectively. The proposed method outperforms current works, especially in segmenting fine-grained tumor subregions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
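The Hausdorff distance used in this entry to score boundary accuracy is the largest of the two directed nearest-neighbor distances between boundary point sets. A brute-force illustration for small 2D point sets (not the paper's implementation, which operates on 3D voxel boundaries):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (N,2) and b (M,2)."""
    # Pairwise Euclidean distances via broadcasting: d[i, j] = ||a_i - b_j||.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distances: worst nearest-neighbor gap in each direction.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy boundary point sets: b has one point 3 units away from a's nearest point.
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [4.0, 0.0]])
```

Because it takes a maximum over worst-case gaps, a single stray boundary point can dominate the score, which is why robust variants such as the 95th-percentile Hausdorff are often reported instead.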
10. Brain tumor segmentation by combining MultiEncoder UNet with wavelet fusion.
- Author
-
Pan, Yuheng, Yong, Haohan, Lu, Weijia, Li, Guoyan, and Cong, Jia
- Subjects
ARTIFICIAL neural networks, BRAIN tumors, MAGNETIC resonance imaging, TRANSFORMER models, SURGICAL diagnosis
- Abstract
Background and objective: Accurate segmentation of brain tumors from multimodal magnetic resonance imaging (MRI) holds significant importance in clinical diagnosis and surgical intervention. However, current deep learning methods handle multimodal MRI with an early fusion strategy that implicitly assumes the modal relationships are linear, which tends to ignore the complementary information between modalities and negatively impacts model performance. Meanwhile, long-range relationships between voxels cannot be captured due to the localized character of the convolution procedure. Method: Aiming at this problem, we propose a multimodal segmentation network based on a late fusion strategy that employs multiple encoders and a decoder for the segmentation of brain tumors. Each encoder is specialized for processing a distinct modality. Notably, our framework includes a feature fusion module based on a 3D discrete wavelet transform aimed at extracting complementary features among the encoders. Additionally, a 3D global context-aware module is introduced to capture the long-range dependencies of tumor voxels at a high level of features. The decoder combines fused and global features to enhance the network's segmentation performance. Result: Our proposed model is evaluated on the publicly available BraTS2018 and BraTS2021 datasets. The experimental results show competitiveness with state-of-the-art methods. Conclusion: The results demonstrate that our approach applies a novel concept for multimodal fusion within deep neural networks and delivers more accurate and promising brain tumor segmentation, with the potential to assist physicians in diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
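The wavelet-fusion idea in this entry — combine low-frequency (approximation) bands while keeping the stronger high-frequency (detail) response — can be illustrated with a single-level 1D Haar transform. This is a deliberately simplified sketch; the paper's module operates on 3D feature maps, and the fusion rule here (average approximations, max-magnitude details) is one common convention, not necessarily the authors' exact design:

```python
import numpy as np

def haar_1d(x):
    """Single-level Haar decomposition along the last axis (even length)."""
    approx = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
    detail = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
    return approx, detail

def ihaar_1d(approx, detail):
    """Exact inverse of haar_1d."""
    out = np.empty(approx.shape[:-1] + (approx.shape[-1] * 2,))
    out[..., 0::2] = (approx + detail) / np.sqrt(2)
    out[..., 1::2] = (approx - detail) / np.sqrt(2)
    return out

def wavelet_fuse(f1, f2):
    """Average the low-frequency bands, keep the stronger high-frequency band."""
    a1, d1 = haar_1d(f1)
    a2, d2 = haar_1d(f2)
    a = (a1 + a2) / 2
    d = np.where(np.abs(d1) >= np.abs(d2), d1, d2)
    return ihaar_1d(a, d)
```

Fusing a signal with itself reconstructs it exactly, which is a quick sanity check that the transform pair is lossless.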
11. Diffusion network with spatial channel attention infusion and frequency spatial attention for brain tumor segmentation.
- Author
-
Mi, Jiaqi and Zhang, Xindong
- Subjects
-
DISTRIBUTION (Probability theory), BRAIN tumors, MAGNETIC resonance imaging, DEEP learning, FEATURE extraction, IMAGE segmentation
- Abstract
Background: Accurate segmentation of gliomas is crucial for diagnosis, treatment planning, and evaluating therapeutic efficacy. Physicians typically analyze and delineate tumor regions in brain magnetic resonance imaging (MRI) images based on personal experience, which is often time-consuming and subject to individual interpretation. Despite advancements in deep learning technology for image segmentation, current techniques still face challenges in clearly defining tumor boundary contours and enhancing segmentation accuracy. Purpose: To address these issues, this paper proposes a conditional diffusion network (SF-Diff) with a spatial channel attention infusion (SCAI) module and a frequency spatial attention (FSA) mechanism to achieve accurate segmentation of the whole tumor (WT) region in brain tumors. Methods: SF-Diff initially extracts multiscale information from multimodal MRI images and subsequently employs a diffusion model to restore boundaries and details, thereby enabling accurate brain tumor segmentation. Specifically, a SCAI module is developed to capture multiscale information within and between encoder layers. A dual-channel upsampling block (DUB) is designed to assist in detail recovery during upsampling. A FSA mechanism is introduced to better match the conditional features with the diffusion probability distribution information. Furthermore, a cross-model loss function supervises the feature extraction of the conditional model and the noise distribution of the diffusion model. Results: The dataset used in this paper is publicly available and includes 369 patient cases from the Multimodal Brain Tumor Segmentation Challenge 2020 (BraTS2020). The experiments on BraTS2020 demonstrate that SF-Diff performs better than other state-of-the-art models, achieving a Dice score of 91.87%, a Hausdorff 95 of 5.47 mm, an IoU of 84.96%, a sensitivity of 92.29%, and a specificity of 99.95%. Conclusions: The proposed SF-Diff performs well in identifying the WT region of brain tumors compared to other state-of-the-art models, especially in terms of boundary contours and non-contiguous lesion regions, which is clinically significant. In the future, we will further develop this method for the brain tumor three-class segmentation task. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Mixture-of-experts and semantic-guided network for brain tumor segmentation with missing MRI modalities.
- Author
-
Liu, Siyu, Wang, Haoran, Li, Shiman, and Zhang, Chenxi
- Abstract
Accurate brain tumor segmentation with multi-modal MRI images is crucial, but missing modalities in clinical practice often reduce accuracy. The aim of this study is to propose a mixture-of-experts and semantic-guided network to tackle the issue of missing modalities in brain tumor segmentation. We introduce a transformer-based encoder with novel mixture-of-experts blocks. In each block, four modality experts aim for modality-specific feature learning. Learnable modality embeddings are employed to alleviate the negative effect of missing modalities. We also introduce a decoder guided by semantic information, designed to pay higher attention to various tumor regions. Finally, we conduct extensive comparison experiments with other models as well as ablation experiments to validate the performance of the proposed model on the BraTS2018 dataset. The proposed model can accurately segment brain tumor sub-regions even with missing modalities. It achieves an average Dice score of 0.81 for the whole tumor, 0.66 for the tumor core, and 0.52 for the enhanced tumor across the 15 modality combinations, achieving top or near-top results in most cases, while also exhibiting a lower computational cost. Our mixture-of-experts and semantic-guided network achieves accurate and reliable brain tumor segmentation results with missing modalities, indicating its significant potential for clinical applications. Our source code is available at https://github.com/MaggieLSY/MESG-Net. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
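The core mixture-of-experts idea above — per-modality experts whose outputs are fused, with missing modalities contributing zero weight — can be sketched with a masked softmax gate. This is a generic illustration, not the paper's transformer blocks; the toy features and uniform gate logits are assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_fuse(expert_feats, gate_logits, present):
    """Fuse per-modality expert features, masking out missing modalities.

    expert_feats: (n_modalities, d) features, one row per modality expert.
    gate_logits:  (n_modalities,) gating scores (learned in a real model).
    present:      (n_modalities,) boolean availability mask.
    """
    logits = np.where(present, gate_logits, -np.inf)  # absent experts get weight 0
    w = softmax(logits)
    return w @ expert_feats

# Four hypothetical modality experts (e.g. T1, T2, FLAIR, T1ce), two missing.
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [2.0, 2.0]])
logits = np.zeros(4)
present = np.array([True, True, False, False])
```

With uniform logits and two modalities present, the gate splits weight 0.5/0.5 over the available experts, so the fused feature is their mean.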
13. Active Learning in Brain Tumor Segmentation with Uncertainty Sampling and Annotation Redundancy Restriction.
- Author
-
Kim, Daniel D, Chandra, Rajat S, Yang, Li, Wu, Jing, Feng, Xue, Atalay, Michael, Bettegowda, Chetan, Jones, Craig, Sair, Haris, Liao, Wei-hua, Zhu, Chengzhang, Zou, Beiji, Kazerooni, Anahita Fathi, Nabavizadeh, Ali, Jiao, Zhicheng, Peng, Jian, and Bai, Harrison X
- Abstract
Deep learning models have demonstrated great potential in medical imaging but are limited by the expensive, large volume of annotations required. To address this, we compared different active learning strategies by training models on subsets of the most informative images using real-world clinical datasets for brain tumor segmentation, and we propose a framework that minimizes the data needed while maintaining performance. In total, 638 multi-institutional brain tumor magnetic resonance imaging scans were used to train three-dimensional U-Net models and compare active learning strategies. Uncertainty estimation techniques including Bayesian estimation with dropout, bootstrapping, and margin sampling were compared to random query. Strategies to avoid annotating similar images were also considered. We determined the minimum data necessary to achieve performance equivalent to the model trained on the full dataset (α = 0.05). Bayesian approximation with dropout at training and testing showed results equivalent to those of the full-data model (target) with around 30% of the training data needed by random query to achieve target performance (p = 0.018). Annotation redundancy restriction techniques reduced the training data needed by random query to achieve target performance by 20%. We investigated various active learning strategies to minimize the annotation burden for three-dimensional brain tumor segmentation; dropout uncertainty estimation achieved target performance with the least annotated data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
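Margin sampling, one of the uncertainty strategies compared in this study, queries the samples whose top-two class probabilities are closest. A minimal sketch (the toy probability matrix is hypothetical):

```python
import numpy as np

def margin_query(probs, k):
    """Pick the k samples with the smallest top-2 probability margin.

    probs: (n_samples, n_classes) predicted class probabilities.
    """
    srt = np.sort(probs, axis=1)
    margin = srt[:, -1] - srt[:, -2]  # confident samples have a large margin
    return np.argsort(margin)[:k]     # smallest margin = most uncertain

# Three hypothetical predictions: one confident, two less so.
probs = np.array([[0.90, 0.10],
                  [0.55, 0.45],
                  [0.70, 0.30]])
```

Here the 0.55/0.45 prediction is queried first, the 0.90/0.10 one last, which is the intended behavior: annotation effort goes to the cases the model is least sure about.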
14. DTASUnet: a local and global dual transformer with the attention supervision U-network for brain tumor segmentation
- Author
-
Bo Ma, Qian Sun, Ze Ma, Baosheng Li, Qiang Cao, Yungang Wang, and Gang Yu
- Subjects
Brain tumor segmentation, Deep learning, Multimodal MRI, Local and global transformer, Attention supervision, 3D U-Net, Medicine, Science
- Abstract
Glioma refers to a highly prevalent type of brain tumor that is strongly associated with a high mortality rate. During the treatment process of the disease, it is particularly important to accurately perform segmentation of the glioma from Magnetic Resonance Imaging (MRI). However, existing methods used for glioma segmentation usually rely solely on either local or global features and perform poorly in terms of capturing and exploiting critical information from tumor volume features. Herein, we propose a local and global dual transformer with an attentional supervision U-shape network called DTASUnet, which is purposed for glioma segmentation. First, we built a pyramid hierarchical encoder based on 3D shift local and global transformers to effectively extract the features and relationships of different tumor regions. We also designed a 3D channel and spatial attention supervision module to guide the network, allowing it to capture key information in volumetric features more accurately during the training process. In the BraTS 2018 validation set, the average Dice scores of DTASUnet for the tumor core (TC), whole tumor (WT), and enhancing tumor (ET) regions were 0.845, 0.905, and 0.808, respectively. These results demonstrate that DTASUnet has utility in assisting clinicians with determining the location of gliomas to facilitate more efficient and accurate brain surgery and diagnosis.
- Published
- 2024
- Full Text
- View/download PDF
15. Efficient deep learning algorithms for lower grade gliomas cancer MRI image segmentation: A case study
- Author
-
AmirReza BabaAhmadi and Zahra FallahPour
- Subjects
brain tumor segmentation, lower grade gliomas, transformers, segformer, mri images, Mathematics, QA1-939
- Abstract
This study explores the use of efficient deep learning algorithms for segmenting lower grade gliomas (LGG) in medical images. It evaluates various pre-trained atrous-convolutional architectures and U-Nets, proposing a novel transformer-based approach that surpasses traditional methods. DeepLabV3+ with MobileNetV3 backbone achieved the best results among pre-trained models, but the transformer-based approach excelled with superior segmentation accuracy and efficiency. Transfer learning significantly enhanced model performance on the LGG dataset, even with limited training samples, emphasizing the importance of selecting appropriate pre-trained models. The transformer-based method offers advantages such as efficient memory usage, better generalization, and the ability to process images of arbitrary sizes, making it suitable for clinical applications. These findings suggest that advanced deep learning techniques can improve diagnostic tools for LGG and potentially other cancers, highlighting the transformative impact of deep learning and transfer learning in medical image segmentation.
- Published
- 2025
- Full Text
- View/download PDF
16. Pocket convolution Mamba for brain tumor segmentation.
- Author
-
Zhang, Hao, Wang, Jiashu, Zhao, Yunhao, Wang, Lianjie, Zhang, Wenyin, Chen, Yeh-Cheng, and Xiong, Neal
- Abstract
In the field of brain tumor segmentation, models based on CNNs and transformers have received a lot of attention. However, CNNs have limitations in long-range modeling, and although transformers can model long distances, they have quadratic computational complexity. Recently, state space models (SSM), exemplified by the Mamba model, achieve linear computational complexity and are adept at long-distance interactions. In this paper, we propose pocket convolution Mamba (P-BTS), which utilizes the PocketNet paradigm, SSM, and patch contrastive learning to achieve an efficient segmentation model. Specifically, the encoder follows the PocketNet paradigm, the SSM is used at the highest level of the model encoder to capture rich semantic information, and patch contrastive learning is achieved through the results of dual-stream data. Meanwhile, we designed a spatial channel attention (SCA) module to enhance control over spatial channels, and a feature complement module (FCM) to facilitate the interaction between low-level features and high-level semantic information. We conducted comprehensive experiments on the BraTS2018 and BraTS2019 datasets, and the results show that P-BTS has excellent segmentation performance. Our code has been released at . [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
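The linear-time property that the entry above attributes to SSMs comes from a simple recurrence: each step updates a fixed-size state, so cost grows linearly with sequence length rather than quadratically as in self-attention. A toy discrete state space scan (illustrative only; Mamba additionally makes the transition parameters input-dependent):

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run a discrete linear state space model over a scalar input sequence.

    Recurrence: x_t = A @ x_{t-1} + B * u_t,  output: y_t = C @ x_t.
    One fixed-cost step per token => linear total cost in sequence length.
    """
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B * u_t
        ys.append(C @ x)
    return np.array(ys)

# Toy 1-state SSM whose state accumulates a running sum of the input.
A = np.array([[1.0]])
B = np.array([1.0])
C = np.array([1.0])
u = [1.0, 2.0, 3.0]
```

With these parameters the output is the running sum of the input, demonstrating that a fixed-size state can still carry information across arbitrarily long ranges.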
17. EFU Net: Edge Information Fused 3D Unet for Brain Tumor Segmentation
- Author
-
Y. Wang, H. Tian, and M. Liu
- Subjects
deep learning, brain tumor segmentation, encoder decoder structure, edge attention mechanism, hybrid loss function, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Brain tumors are abnormal cell proliferations formed in brain tissue that can cause neurological dysfunction and cognitive impairment, posing a serious threat to human health. Fully automatic segmentation of brain tumors by computer is therefore very challenging, because of the mutual infiltration and fuzzy boundaries between the focus areas and normal brain tissue. To address these issues, a segmentation method that integrates edge features is proposed in this paper. The overall segmentation architecture follows an encoder-decoder structure, with rich features extracted by the encoder. The first two layers of features are input to an edge attention module to extract tumor edge features, which are then fully fused with the features of the decoder stage. At the same time, an adaptive weighted hybrid loss function is introduced, which adaptively adjusts the weights of the different loss terms during training. Relevant experiments were carried out on a public brain tumor dataset. The mean Dice values of the proposed segmentation model in the whole tumor area (WT), the core tumor area (TC), and the enhancing tumor area (ET) reach 91.10%, 87.16%, and 88.86%, respectively, and the mean Hausdorff distances are 3.92, 5.12, and 1.92 mm, respectively. The experimental results show that the proposed method significantly improves segmentation accuracy, especially at tumor edges.
- Published
- 2024
18. Mask region-based convolutional neural network and VGG-16 inspired brain tumor segmentation
- Author
-
Niha Kamal Basha, Christo Ananth, K. Muthukumaran, Gadug Sudhamsu, Vikas Mittal, and Fikreselam Gared
- Subjects
Brain tumor segmentation, R-CNN mask, Transfer learning, Inception V3, ResNet50, VGG16, Medicine, Science
- Abstract
The process of brain tumour segmentation entails locating the tumour precisely in images. Magnetic Resonance Imaging (MRI) is typically used by doctors to find any brain tumours or tissue abnormalities. With the use of region-based Convolutional Neural Network (R-CNN) masks, Grad-CAM, and transfer learning, this work offers an effective method for the detection of brain tumours; the goal is to help doctors make highly accurate diagnoses. A transfer learning-based model is suggested that offers high sensitivity and accuracy scores for brain tumour detection when segmentation is done using R-CNN masks. To train the model, the Inception V3, VGG-16, and ResNet-50 architectures were utilised, and the method was developed on the Brain MRI Images for Brain Tumour Detection dataset. Performance is evaluated and reported in terms of recall, specificity, sensitivity, accuracy, precision, and F1 score. A thorough analysis compared the proposed model operating with the three distinct architectures: VGG-16, Inception V3, and ResNet-50. The VGG-16-inspired variant of the proposed model was also compared with related works. Achieving high sensitivity and accuracy percentages was the main goal; using this approach, an accuracy and sensitivity of around 99% were obtained, much greater than current efforts.
- Published
- 2024
- Full Text
- View/download PDF
19. FLAIR MRI sequence synthesis using squeeze attention generative model for reliable brain tumor segmentation
- Author
-
Abdulkhalek Al-Fakih, Abdullah Shazly, Abbas Mohammed, Mohammed Elbushnaq, Kanghyun Ryu, Yeong Hyeon Gu, Mohammed A. Al-masni, and Meena M. Makary
- Subjects
Brain tumor segmentation, MR sequence synthesis, NnU-net, GANs, Multi-contrast MR, Deep learning, Engineering (General). Civil engineering (General), TA1-2040
- Abstract
Manual segmentation of brain tumors using structural magnetic resonance imaging (MRI) is an arduous and time-consuming task. Therefore, automatic and robust segmentation will considerably benefit neuro-oncological clinical trials by reducing excessive manual annotation time. Herein, we propose a deep learning model that automatically segments brain tumors even in cases of missing MRI sequences, which are common in practical clinical settings. To address this issue, we enhance a generative adversarial network (GAN) by incorporating a squeeze-and-excitation (SE) attention module into its generator and a PatchGAN into its discriminator. The SE module recalibrates channel responses by explicitly modeling interdependencies, enabling the network to focus on critical regions such as tumor areas. Our proposed generative model is optimized using a combination of adversarial, structural similarity, and mean absolute error losses to synthesize missing MRI sequences more effectively. This enhancement allows our model to synthesize the missing MRI sequence (fluid attenuated inversion recovery [FLAIR]) by leveraging information from other available sequences (T1-weighted, T2-weighted, or contrast-enhanced T1-weighted [T1ce]). For the segmentation task, we employ an optimized nnU-Net model, which is trained using existing sequences and evaluated using both available and synthesized sequences (including missing ones), mimicking real-world scenarios where often only limited MRI sequences are available or usable. Our findings reveal a notable enhancement in brain tumor segmentation, as indicated by a significant increase in the overall Dice similarity coefficient (DSC) from 0.688 (when FLAIR is missing) to 0.873 (when using synthesized FLAIR derived from T2). This improvement brings the segmentation performance closer to what was achieved when real FLAIR was available, where the DSC reaches 0.901.
Moreover, our synthesizing model was also tested on two additional datasets, the BraTS 2020 validation set and the BraTS Africa 2023 training set, producing results comparable to those on BraTS 2021, thereby demonstrating its robustness and generalizability. In addition, the resulting tumor segmentations are subsequently employed to assess the response to treatment, according to response assessment in neuro-oncology criteria, both when all sequences were available and when synthesis was employed.
- Published
- 2024
- Full Text
- View/download PDF
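The squeeze-and-excitation recalibration described above — global average pooling, a bottleneck MLP, then a sigmoid gate that rescales each channel — can be sketched as follows. This is a generic SE block on a numpy feature map, not the authors' generator; the weights are hypothetical placeholders that a real model would learn:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(feats, w1, w2):
    """Squeeze-and-excitation channel recalibration.

    feats: (C, H, W) feature map.
    w1:    (C//r, C) squeeze projection (reduction ratio r).
    w2:    (C, C//r) excitation projection back to C channels.
    """
    squeeze = feats.mean(axis=(1, 2))                     # global average pool per channel
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # bottleneck MLP + gate in (0, 1)
    return feats * excite[:, None, None]                  # rescale each channel map
```

Because the gate is a per-channel scalar in (0, 1), the block can only suppress or pass channels, which is what lets the network emphasize tumor-relevant responses without altering spatial structure.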
20. A Dual Cascaded Deep Theoretic Learning Approach for the Segmentation of the Brain Tumors in MRI Scans.
- Author
-
Sreedhar, Jinka, Dara, Suresh, Srinivasulu, C. H., Katari, Butchi Raju, Alkhayyat, Ahmed, Vidyarthi, Ankit, and Alsulami, Mashael M.
- Subjects
-
MAGNETIC resonance imaging, COMPUTER-assisted image analysis (Medicine), BRAIN tumors, DIAGNOSTIC imaging, BRAIN imaging, DEEP learning
- Abstract
Accurate segmentation of brain tumors from magnetic resonance imaging (MRI) is crucial for diagnosis, treatment planning, and monitoring of patients with neurological disorders. This paper proposes an approach for brain tumor segmentation employing a cascaded architecture integrating L-Net and W-Net deep learning models. The proposed cascaded model leverages the strengths of U-Net as a baseline model to enhance the precision and robustness of the segmentation process. In the proposed framework, the L-Net excels in capturing the mask, while the W-Net focuses on fine-grained features and spatial information to discern complex tumor boundaries. The cascaded configuration allows a seamless integration of these complementary models, enhancing overall segmentation performance. To evaluate the proposed approach, extensive experiments were conducted on the BraTS and SMS Medical College datasets comprising multi-modal MRI images. The experimental results demonstrate that the cascaded L-Net and W-Net model consistently outperforms individual models and other state-of-the-art segmentation methods. The Dice similarity coefficient values achieved indicate high segmentation accuracy, while the sensitivity and specificity metrics showcase the model's ability to correctly identify tumor regions and exclude healthy tissues. Moreover, the low Hausdorff distance values confirm the model's capability to accurately delineate tumor boundaries. In comparison with existing methods, the proposed cascaded scheme leverages the strengths of each network, leading to superior performance over existing works in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
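The metrics this abstract reports (Dice Similarity Coefficient, Sensitivity, Specificity) can be computed for binary segmentation masks as follows; this is a minimal NumPy sketch with illustrative arrays, not code from the paper:

```python
import numpy as np

def dice_sensitivity_specificity(pred, gt):
    """Dice, sensitivity, and specificity for two binary masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # tumor voxels correctly predicted
    fp = np.logical_and(pred, ~gt).sum()    # healthy voxels marked as tumor
    fn = np.logical_and(~pred, gt).sum()    # tumor voxels missed
    tn = np.logical_and(~pred, ~gt).sum()   # healthy voxels correctly excluded
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

# toy 3x3 "segmentations": two overlapping pixels, one FP, one FN
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 1], [0, 0, 0]])
dice, sens, spec = dice_sensitivity_specificity(pred, gt)
```

The same counts also yield precision and F1 if needed, so one pass over the confusion terms covers all the overlap metrics quoted across these entries.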
21. Adapting Segment Anything Model for 3D Brain Tumor Segmentation With Missing Modalities.
- Author
-
Lei, Xiaoliang, Yu, Xiaosheng, Bai, Maocheng, Zhang, Jingsi, and Wu, Chengdong
- Subjects
- *
MAGNETIC resonance imaging , *BRAIN tumors , *IMAGE analysis , *DEEP learning , *DIAGNOSIS - Abstract
The problem of missing or unavailable magnetic resonance imaging modalities challenges clinical diagnosis and medical image analysis technology. Although the development of deep learning and the proposal of large models have improved medical analytics, this problem remains to be fully resolved. The purpose of this study was to efficiently adapt the Segment Anything Model, a two‐dimensional visual foundation model trained on natural images, to address the challenge of brain tumor segmentation with missing modalities. We designed a twin network structure that processes missing and intact magnetic resonance imaging (MRI) modalities separately using shared parameters. It compares the features of the two network branches to minimize the differences between their feature maps. We added a multimodal adapter before the image encoder and a spatial–depth adapter before the mask decoder to fine‐tune the Segment Anything Model for brain tumor segmentation. The proposed method was evaluated using datasets provided by the MICCAI BraTS2021 Challenge. In terms of accuracy and robustness, the proposed method is better than existing solutions. The proposed method can segment brain tumors well under the missing modality condition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. EFU Net: Edge Information Fused 3D Unet for Brain Tumor Segmentation.
- Author
-
Yu WANG, Hengyi TIAN, and Minhua LIU
- Subjects
BRAIN tumors ,DEEP learning ,CELL proliferation ,COGNITION disorders ,TUMORS - Abstract
Brain tumors are abnormal cell proliferations formed in brain tissue, which can cause neurological dysfunction and cognitive impairment, posing a serious threat to human health. Fully automatic computer-based segmentation of brain tumors is therefore very challenging because of the mutual infiltration and fuzzy boundaries between focus areas and normal brain tissue. To address these issues, a segmentation method that integrates edge features is proposed in this paper. The overall segmentation architecture follows an encoder-decoder structure, with rich features extracted by the encoder. The first two layers of features are input to an edge attention module to extract tumor edge features, which are fully fused with the features of the decoder stage. At the same time, an adaptive weighted mixed loss function is introduced to train the network by adaptively adjusting the weights of the different loss terms during training. Experiments were carried out on a public brain tumor dataset. The Dice mean values of the proposed segmentation model in the whole tumor area (WT), the core tumor area (TC), and the enhancing tumor area (ET) reach 91.10%, 87.16%, and 88.86%, respectively, and the mean Hausdorff distances are 3.92, 5.12, and 1.92 mm, respectively. The experimental results show that the proposed method can significantly improve segmentation accuracy, especially at the tumor edges. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
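The Hausdorff distance reported above measures the worst-case boundary disagreement between two point sets: the largest distance from any point in one set to its nearest point in the other, symmetrized. A brute-force NumPy sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (N,2) and b (M,2)."""
    # pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # for each point, distance to the nearest point of the other set;
    # take the worst case in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])
```

In practice the point sets are the boundary voxels of the predicted and ground-truth masks, and the robust 95th-percentile variant (HD95) is often reported instead of the maximum.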
23. HybridCSF model for magnetic resonance image based brain tumor segmentation.
- Author
-
Kataria, Jyoti and Panda, Supriya P.
- Subjects
CONVOLUTIONAL neural networks ,FUZZY clustering technique ,BRAIN tumors ,MAGNETIC resonance imaging ,SUPPORT vector machines - Abstract
The human brain comprises a complex interconnection of nerve cells and vital structures, which regulates crucial bodily processes. Although neurons commonly undergo developmental stages, they may occasionally experience abnormalities, leading to abnormal growths known as brain tumors. The objective of brain tumor segmentation is to produce precise boundaries of brain tumor regions. This study extensively analyzes deep learning methods for brain tumor detection, evaluating their effectiveness across diverse datasets. It introduces a hybrid model named HybriCSF (hybrid convolutional–SVM–fuzzy C-means), which combines a convolutional neural network (CNN) with a support vector machine (SVM) classifier and fuzzy C-means (FCM) clustering. The proposed model was implemented on the Br35H, BraTS 2020, and BraTS 2021 datasets. The suggested model outperformed existing methods, achieving 98.6% accuracy on the Br35H dataset and Dice scores of 0.63, 0.87, and 0.81 on the BraTS 2020 dataset for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively. The Dice scores achieved on the BraTS 2021 dataset are 0.89, 0.95, and 0.89 for ET, WT, and TC, respectively. The results show that the suggested HybriCSF model outperforms other CNN-based models in terms of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
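The fuzzy C-means component of the hybrid model above assigns each voxel a soft membership to every cluster rather than a hard label. A minimal 1-D NumPy sketch of the standard FCM update rules (data and parameter values illustrative, not the paper's):

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy C-means on 1-D intensities x; returns centers, memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m                             # fuzzifier-weighted memberships
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))           # closer center -> larger membership
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# two well-separated intensity groups around 0.15 and 0.85
x = np.array([0.1, 0.15, 0.2, 0.8, 0.85, 0.9])
centers, u = fuzzy_c_means(x)
```

On MRI intensities the soft memberships give the hybrid pipeline a per-voxel tumor likelihood that a downstream classifier can refine.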
24. An Optimization Numerical Spiking Neural Membrane System with Adaptive Multi-Mutation Operators for Brain Tumor Segmentation.
- Author
-
Dong, Jianping, Zhang, Gexiang, Hu, Yangheng, Wu, Yijin, and Rong, Haina
- Subjects
- *
BRAIN tumors , *OPTIMIZATION algorithms , *MAGNETIC resonance imaging , *THRESHOLDING algorithms , *DIFFERENTIAL evolution - Abstract
Magnetic Resonance Imaging (MRI) is an important diagnostic technique for brain tumors due to its ability to generate images without tissue damage or skull artifacts. Therefore, MRI images are widely used for the segmentation of brain tumors. This paper is the first attempt to discuss the use of optimization spiking neural P systems to improve the threshold segmentation of brain tumor images. Specifically, a threshold segmentation approach based on optimization numerical spiking neural P systems with adaptive multi-mutation operators (ONSNPSamos) is proposed to segment brain tumor images. An ONSNPSamo with a multi-mutation strategy is introduced to balance exploration and exploitation abilities, and an approach combining the ONSNPSamo with connectivity algorithms is proposed to address the brain tumor segmentation problem. Our experimental results on the CEC 2017 benchmarks (basic, shifted and rotated, hybrid, and composition function optimization problems) demonstrate that the ONSNPSamo is better than or close to 12 optimization algorithms. Furthermore, case studies on BraTS 2019 show that the approach combining the ONSNPSamo and connectivity algorithms can segment brain tumor images more effectively than most of the algorithms involved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Revolutionizing Brain Tumor Analysis: A Fusion of ChatGPT and Multi-Modal CNN for Unprecedented Precision.
- Author
-
Rawas, Soha and Samala, Agariadne Dwinggo
- Subjects
CHATGPT ,BRAIN tumors ,CONVOLUTIONAL neural networks ,NATURAL language processing ,MULTIMODAL user interfaces ,CHATBOTS - Abstract
In this study, we introduce an innovative approach to significantly enhance the precision and interpretability of brain tumor detection and segmentation. Our method ingeniously integrates the cutting-edge capabilities of the ChatGPT chatbot interface with a state-of-the-art multi-modal convolutional neural network (CNN). Tested rigorously on the BraTS dataset, our method showcases unprecedented performance, outperforming existing techniques in terms of both accuracy and efficiency, with an impressive Dice score of 0.89 for tumor segmentation. By seamlessly integrating ChatGPT, our model unveils deep-seated insights into the intricate decision-making processes, providing researchers and physicians with invaluable understanding and confidence in the results. This groundbreaking fusion holds immense promise, poised to revolutionize the landscape of medical imaging, with far-reaching implications for clinical practice and research. Our study exemplifies the transformative potential achieved through the synergistic combination of multi-modal CNNs and natural language processing, paving the way for remarkable advancements in brain tumor detection and segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. FLAIR MRI sequence synthesis using squeeze attention generative model for reliable brain tumor segmentation.
- Author
-
Al-Fakih, Abdulkhalek, Shazly, Abdullah, Mohammed, Abbas, Elbushnaq, Mohammed, Ryu, Kanghyun, Gu, Yeong Hyeon, Al-masni, Mohammed A., and Makary, Meena M.
- Subjects
BRAIN tumors ,GENERATIVE adversarial networks ,MAGNETIC resonance imaging ,DEEP learning ,DIFFUSION magnetic resonance imaging ,SQUEEZED light - Abstract
Manual segmentation of brain tumors using structural magnetic resonance imaging (MRI) is an arduous and time-consuming task. Therefore, automatic and robust segmentation will considerably influence neuro-oncological clinical trials by reducing excessive manual annotation time. Herein, we propose a deep learning model that automatically segments brain tumors even in cases of missing MRI sequences, which are common in practical clinical settings. To address this issue, we enhance a generative adversarial network (GAN) by incorporating a squeeze-and-excitation (SE) attention module into its generator and a PatchGAN into its discriminator. The SE module recalibrates channel responses by explicitly modeling interdependencies, enabling the network to focus on critical regions such as tumor areas. Our proposed generative model is optimized using a combination of adversarial, structural similarity, and mean absolute error losses to synthesize missing MRI sequences more effectively. This enhancement allows our model to synthesize the missing MRI sequence (fluid attenuated inversion recovery [FLAIR]) by leveraging information from other available sequences (T1-weighted, T2-weighted, or contrast-enhanced T1-weighted [T1ce]). For the segmentation task, we employ an optimized nnU-Net model, which is trained using existing sequences and evaluated using both available and synthesized sequences (including missing ones), mimicking real-world scenarios where often only limited MRI sequences are available or usable. Our findings reveal a notable enhancement in brain tumor segmentation, as indicated by a significant increase in the overall Dice similarity coefficient (DSC) from 0.688 (when FLAIR is missing) to 0.873 (when using synthesized FLAIR derived from T2). This improvement brings the segmentation performance closer to what was achieved when real FLAIR was available, where the DSC reaches 0.901. 
Moreover, our synthesis model was also tested on two additional datasets, the BraTS 2020 validation set and the BraTS Africa 2023 training set, and produced results comparable to those on BraTS 2021, demonstrating its robustness and generalizability. In addition, the resulting tumor segmentations are subsequently employed to assess the response to treatment, both when all sequences were available and when synthesis was employed, according to response assessment in neuro-oncology criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
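The squeeze-and-excitation recalibration described above (global pooling, a bottleneck excitation, then per-channel scaling) can be sketched in NumPy; the weights and shapes here are illustrative, not the paper's:

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """SE recalibration of a (C, H, W) feature map with bottleneck weights w1, w2."""
    z = feat.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)                 # excitation: reduce to C/r, ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # expand back to C, sigmoid gate
    return feat * s[:, None, None]              # rescale each channel by its gate

rng = np.random.default_rng(0)
feat = rng.random((4, 8, 8))                    # C=4 channels
w1 = rng.standard_normal((2, 4))                # reduction ratio r=2
w2 = rng.standard_normal((4, 2))
out = squeeze_excite(feat, w1, w2)
```

Because each channel is multiplied by a single learned gate in (0, 1), the module suppresses uninformative channels while leaving spatial structure untouched.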
27. Efficient Brain Tumor Segmentation with Lightweight Separable Spatial Convolutional Network.
- Author
-
Zhang, Hao, Liu, Meng, Qi, Yuan, Yang, Ning, Hu, Shunbo, Nie, Liqiang, and Zhang, Wenyin
- Subjects
BRAIN tumors ,BRAIN damage ,SUPPLY & demand ,BRAIN imaging - Abstract
Accurate and automated segmentation of lesions in brain MRI scans is crucial in diagnostics and treatment planning. Despite the significant achievements of existing approaches, they often require substantial computational resources and fail to fully exploit the synergy between low-level and high-level features. To address these challenges, we introduce the Separable Spatial Convolutional Network (SSCN), an innovative model that refines the U-Net architecture to achieve efficient brain tumor segmentation with minimal computational cost. SSCN integrates the PocketNet paradigm and replaces standard convolutions with depthwise separable convolutions, resulting in a significant reduction in parameters and computational load. Additionally, our feature complementary module enhances the interaction between features across the encoder-decoder structure, facilitating the integration of multi-scale features while maintaining low computational demands. The model also incorporates a separable spatial attention mechanism, enhancing its capability to discern spatial details. Empirical validations on standard datasets demonstrate the effectiveness of our proposed model, especially in segmenting small and medium-sized tumors, with only 0.27M parameters and 3.68 GFlops. Our code is available at https://github.com/zzpr/SSCN. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
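The parameter savings behind the depthwise separable convolutions used in SSCN follow from a simple count: a standard k×k convolution needs C_in·C_out·k² weights, while a depthwise k×k filter per input channel plus a 1×1 pointwise convolution needs only C_in·k² + C_in·C_out. A sketch (channel counts are illustrative, biases omitted):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k (one filter per input channel) + 1x1 pointwise conv."""
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)       # 64 * 128 * 9  = 73,728 weights
separable = separable_params(64, 128, 3)  # 64 * 9 + 64 * 128 = 8,768 weights
```

For these shapes the separable form uses roughly 8x fewer parameters, which is how models like SSCN reach sub-megaparameter sizes.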
28. Segmentation of glioblastomas via 3D FusionNet
- Author
-
Xiangyu Guo, Botao Zhang, Yue Peng, Feng Chen, and Wenbin Li
- Subjects
brain tumor segmentation ,MRI ,U-net ,SegNet ,3D deep learning model ,Neoplasms. Tumors. Oncology. Including cancer and carcinogens ,RC254-282 - Abstract
Introduction: This study presented an end-to-end 3D deep learning model for the automatic segmentation of brain tumors. Methods: The MRI data used in this study were obtained from a cohort of 630 GBM patients from the University of Pennsylvania Health System (UPENN-GBM). Data augmentation techniques such as flips and rotations were employed to further increase the sample size of the training set. The segmentation performance of the models was evaluated by recall, precision, Dice score, Lesion False Positive Rate (LFPR), Average Volume Difference (AVD), and Average Symmetric Surface Distance (ASSD). Results: When applying the FLAIR, T1, ceT1, and T2 MRI modalities, FusionNet-A and FusionNet-C were the best-performing models overall, with FusionNet-A particularly excelling in the enhancing tumor areas, while FusionNet-C demonstrates strong performance in the necrotic core and peritumoral edema regions. FusionNet-A excels in the enhancing tumor areas across all metrics (0.75 for recall, 0.83 for precision, and 0.74 for Dice score) and also performs well in the peritumoral edema regions (0.77 for recall, 0.77 for precision, and 0.75 for Dice score). Combinations including FLAIR and ceT1 tend to have better segmentation performance, especially for necrotic core regions. Using only FLAIR achieves a recall of 0.73 for peritumoral edema regions. Visualization results also indicate that our model generally achieves segmentation results similar to the ground truth. Discussion: FusionNet combines the benefits of U-Net and SegNet, outperforming the tumor segmentation performance of both. Although our model effectively segments brain tumors with competitive accuracy, we plan to extend the framework to achieve even better segmentation performance.
- Published
- 2024
- Full Text
- View/download PDF
29. Deep Learning-Based Brain Tumor Segmentation—An Overview
- Author
-
Kataria, Jyoti, Panda, Supriya P., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Santosh, KC, editor, Nandal, Poonam, editor, Sood, Sandeep Kumar, editor, and Pandey, Hari Mohan, editor
- Published
- 2024
- Full Text
- View/download PDF
30. Causal Intervention for Brain Tumor Segmentation
- Author
-
Liu, Hengxin, Li, Qiang, Nie, Weizhi, Xu, Zibo, Liu, Anan, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Linguraru, Marius George, editor, Dou, Qi, editor, Feragen, Aasa, editor, Giannarou, Stamatia, editor, Glocker, Ben, editor, Lekadir, Karim, editor, and Schnabel, Julia A., editor
- Published
- 2024
- Full Text
- View/download PDF
31. Volumetric Brain Tumor Segmentation Using V-Net
- Author
-
Uppal, Doli, Ananda, Maramreddy Krishna, Prakash, Mudavath Bhanu, Prakash, Surya, Bansal, Jagdish Chand, Series Editor, Deep, Kusum, Series Editor, Nagar, Atulya K., Series Editor, Goar, Vishal, editor, Sharma, Aditi, editor, Shin, Jungpil, editor, and Mridha, M. Firoz, editor
- Published
- 2024
- Full Text
- View/download PDF
32. Automatic Brain Tumor Segmentation Using Convolutional Neural Networks: U-Net Framework with PSO-Tuned Hyperparameters
- Author
-
Saifullah, Shoffan, Dreżewski, Rafał, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Affenzeller, Michael, editor, Winkler, Stephan M., editor, Kononova, Anna V., editor, Trautmann, Heike, editor, Tušar, Tea, editor, Machado, Penousal, editor, and Bäck, Thomas, editor
- Published
- 2024
- Full Text
- View/download PDF
33. A 3D-2D Hybrid Network with Regional Awareness and Global Fusion for Brain Tumor Segmentation
- Author
-
Zhao, Wenxiu, Dongye, Changlei, Wang, Yumei, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Huang, De-Shuang, editor, Zhang, Chuanlei, editor, and Zhang, Qinhu, editor
- Published
- 2024
- Full Text
- View/download PDF
34. Brain Tumor Segmentation with FPN-Based EfficientNet and XAI
- Author
-
Thai-Nghe, Nguyen, Van Kiet, Vo, Huu-Hoa, Nguyen, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Nguyen, Ngoc Thanh, editor, Chbeir, Richard, editor, Manolopoulos, Yannis, editor, Fujita, Hamido, editor, Hong, Tzung-Pei, editor, Nguyen, Le Minh, editor, and Wojtkiewicz, Krystian, editor
- Published
- 2024
- Full Text
- View/download PDF
35. Incomplete Multimodal Learning with Modality-Aware Feature Interaction for Brain Tumor Segmentation
- Author
-
Cheng, Jianhong, Feng, Rui, Li, Jinyang, Xu, Jun, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Peng, Wei, editor, Cai, Zhipeng, editor, and Skums, Pavel, editor
- Published
- 2024
- Full Text
- View/download PDF
36. Brain Tumor Segmentation Using Ensemble CNN-Transfer Learning Models: DeepLabV3plus and ResNet50 Approach
- Author
-
Saifullah, Shoffan, Dreżewski, Rafał, Hartmanis, Juris, Founding Editor, van Leeuwen, Jan, Series Editor, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Kobsa, Alfred, Series Editor, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Nierstrasz, Oscar, Series Editor, Pandu Rangan, C., Editorial Board Member, Sudan, Madhu, Series Editor, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Weikum, Gerhard, Series Editor, Vardi, Moshe Y, Series Editor, Goos, Gerhard, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Franco, Leonardo, editor, de Mulatier, Clélia, editor, Paszynski, Maciej, editor, Krzhizhanovskaya, Valeria V., editor, Dongarra, Jack J., editor, and Sloot, Peter M. A., editor
- Published
- 2024
- Full Text
- View/download PDF
37. Brain Tumor Segmentation Using Gaussian-Based U-Net Architecture
- Author
-
Saran Raj, Sowrirajan, Logeshwaran, K. S., Anisha Devi, K., Avinash, Mohan Krishna, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Nanda, Satyasai Jagannath, editor, Yadav, Rajendra Prasad, editor, Gandomi, Amir H., editor, and Saraswat, Mukesh, editor
- Published
- 2024
- Full Text
- View/download PDF
38. Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
- Author
-
Huynh, Tuan-Luc, Le, Thanh-Danh, Nguyen, Tam V., Le, Trung-Nghia, Tran, Minh-Triet, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Yan, Wei Qi, editor, Nguyen, Minh, editor, Nand, Parma, editor, and Li, Xuejun, editor
- Published
- 2024
- Full Text
- View/download PDF
39. Squeeze Excitation Embedded Attention U-Net for Brain Tumor Segmentation
- Author
-
Prasanna, Gaurav, Ernest, John Rohit, Lalitha, G., Narayanan, Sathiya, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Gabbouj, Moncef, editor, Pandey, Shyam Sudhir, editor, Garg, Hari Krishna, editor, and Hazra, Ranjay, editor
- Published
- 2024
- Full Text
- View/download PDF
40. Deep Learning Based Lightweight Model for Brain Tumor Classification and Segmentation
- Author
-
Andleeb, Ifrah, Hussain, B. Zahid, Ansari, Salik, Ansari, Mohammad Samar, Kanwal, Nadia, Aslam, Asra, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Naik, Nitin, editor, Jenkins, Paul, editor, Grace, Paul, editor, Yang, Longzhi, editor, and Prajapat, Shaligram, editor
- Published
- 2024
- Full Text
- View/download PDF
41. Optimizing Brain Tumor Segmentation Through CNN U-Net with CLAHE-HE Image Enhancement
- Author
-
Saifullah, Shoffan, Suryotomo, Andiko Putro, Dreżewski, Rafał, Tanone, Radius, Tundo, Tundo, Luo, Xun, Editor-in-Chief, Almohammedi, Akram A., Series Editor, Chen, Chi-Hua, Series Editor, Guan, Steven, Series Editor, Pamucar, Dragan, Series Editor, Putro Suryotomo, Andiko, editor, and Cahya Rustamaji, Heru, editor
- Published
- 2024
- Full Text
- View/download PDF
42. A Comprehensive Multi-modal Domain Adaptative Aid Framework for Brain Tumor Diagnosis
- Author
-
Chu, Wenxiu, Zhou, Yudan, Cai, Shuhui, Chen, Zhong, Cai, Congbo, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Liu, Qingshan, editor, Wang, Hanzi, editor, Ma, Zhanyu, editor, Zheng, Weishi, editor, Zha, Hongbin, editor, Chen, Xilin, editor, Wang, Liang, editor, and Ji, Rongrong, editor
- Published
- 2024
- Full Text
- View/download PDF
43. MagNET: Modality-Agnostic Network for Brain Tumor Segmentation and Characterization with Missing Modalities
- Author
-
Konwer, Aishik, Chen, Chao, Prasanna, Prateek, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Cao, Xiaohuan, editor, Xu, Xuanang, editor, Rekik, Islem, editor, Cui, Zhiming, editor, and Ouyang, Xi, editor
- Published
- 2024
- Full Text
- View/download PDF
44. HAB‐Net: Hierarchical asymmetric convolution and boundary enhancement network for brain tumor segmentation
- Author
-
Yuanjing Hu, Aibin Huang, and Rui Xu
- Subjects
Brain tumor segmentation ,Boundary attention ,hierarchical convolution ,magnetic resonance images ,Transformer ,Photography ,TR1-1050 ,Computer software ,QA76.75-76.765 - Abstract
Abstract Brain tumour segmentation (BTS) is crucial for diagnosis and treatment planning by delineating tumour boundaries and subregions in multi‐modality bio‐imaging data. Several BTS models have been proposed to address specific technical challenges encountered in this field. However, accurately capturing intricate tumour structures and boundaries remains a difficult task. To overcome this challenge, HAB‐Net, a model that combines the strengths of convolutional neural networks and transformer architectures, is presented. HAB‐Net incorporates a custom‐designed hierarchical and pseudo‐convolutional module called hierarchical asymmetric convolutions (HAC). In the encoder, a coordinate attention is included to extract feature maps. Additionally, swin transformer, which has a self‐attention mechanism, is integrated to effectively capture long‐range relationships. Moreover, the decoder is enhanced with a boundary attention module (BAM) to improve boundary information and overall segmentation performance. Extensive evaluations conducted on the BraTS2018 and BraTS2021 datasets demonstrate significant improvements in segmentation accuracy for tumour regions.
- Published
- 2024
- Full Text
- View/download PDF
45. GETNet: Group Normalization Shuffle and Enhanced Channel Self-Attention Network Based on VT-UNet for Brain Tumor Segmentation.
- Author
-
Guo, Bin, Cao, Ning, Zhang, Ruihao, and Yang, Peng
- Subjects
- *
BRAIN tumors , *DEEP learning , *TRANSFORMER models , *DATA mining , *CONVOLUTIONAL neural networks - Abstract
Currently, brain tumors are extremely harmful and prevalent. Deep learning technologies, including CNNs, UNet, and Transformer, have been applied in brain tumor segmentation for many years and have achieved some success. However, traditional CNNs and UNet capture insufficient global information, and Transformer cannot provide sufficient local information. Fusing the global information from Transformer with the local information of convolutions is an important step toward improving brain tumor segmentation. We propose the Group Normalization Shuffle and Enhanced Channel Self-Attention Network (GETNet), a network combining the pure Transformer structure with convolution operations based on VT-UNet, which considers both global and local information. The network includes the proposed group normalization shuffle block (GNS) and enhanced channel self-attention block (ECSA). The GNS is used after the VT Encoder Block and before the downsampling block to improve information extraction. An ECSA module is added to the bottleneck layer to utilize the characteristics of the detailed features in the bottom layer effectively. We also conducted experiments on the BraTS2021 dataset to demonstrate the performance of our network. The Dice coefficient (Dice) score results show that the values for the regions of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) were 91.77, 86.03, and 83.64, respectively. The results show that the proposed model achieves state-of-the-art performance compared with more than eleven benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Estimation of Fractal Dimension and Segmentation of Brain Tumor with Parallel Features Aggregation Network.
- Author
-
Sultan, Haseeb, Ullah, Nadeem, Hong, Jin Seong, Kim, Seung Gu, Lee, Dong Chan, Jung, Seung Yong, and Park, Kang Ryoung
- Subjects
- *
FRACTAL dimensions , *BRAIN tumors , *PETRI nets , *DEEP learning , *DATABASES , *CANCER invasiveness - Abstract
The accurate recognition of a brain tumor (BT) is crucial for accurate diagnosis, intervention planning, and the evaluation of post-intervention outcomes. Conventional methods of manually identifying and delineating BTs are inefficient, prone to error, and time-consuming. Subjective methods for BT recognition are biased because of the diffuse and irregular nature of BTs, along with varying enhancement patterns and the coexistence of different tumor components. Hence, the development of an automated diagnostic system for BTs is vital for mitigating subjective bias and achieving speedy and effective BT segmentation. Recently developed deep learning (DL)-based methods have replaced subjective methods; however, these DL-based methods still have a low performance, showing room for improvement, and are limited to heterogeneous dataset analysis. Herein, we propose a DL-based parallel features aggregation network (PFA-Net) for the robust segmentation of three different regions in a BT scan, and we perform a heterogeneous dataset analysis to validate its generality. The parallel features aggregation (PFA) module exploits the local radiomic contextual spatial features of BTs at low, intermediate, and high levels for different types of tumors and aggregates them in a parallel fashion. To enhance the diagnostic capabilities of the proposed segmentation framework, we introduced the fractal dimension estimation into our system, seamlessly combined as an end-to-end task to gain insights into the complexity and irregularity of structures, thereby characterizing the intricate morphology of BTs. The proposed PFA-Net achieves the Dice scores (DSs) of 87.54%, 93.42%, and 91.02%, for the enhancing tumor region, whole tumor region, and tumor core region, respectively, with the multimodal brain tumor segmentation (BraTS)-2020 open database, surpassing the performance of existing state-of-the-art methods. 
Additionally, PFA-Net is validated with another open database of brain tumor progression and achieves a DS of 64.58% for heterogeneous dataset analysis, surpassing the performance of existing state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
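The fractal dimension estimation that PFA-Net integrates is commonly done by box counting: count the boxes of side s that contain foreground, N(s), at several scales, and fit the slope of log N(s) against log(1/s). A minimal NumPy sketch (not the paper's implementation):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a square binary 2-D mask by box counting."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        # tile the mask into s x s boxes and count boxes with any foreground
        m = mask[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append(m.any(axis=(1, 3)).sum())
    # dimension = slope of log N(s) versus log(1/s)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

mask = np.ones((16, 16), dtype=bool)   # a filled square is 2-dimensional
dim = box_counting_dimension(mask)
```

Irregular tumor boundaries yield non-integer estimates between 1 and 2 per slice, which is the morphology descriptor such systems feed alongside the segmentation output.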
47. SPFC NET: Spatial pyramid feature convolution network for brain tumor segmentation in mri images.
- Author
-
Dheepa, G. and Chithra, PL.
- Subjects
MAGNETIC resonance imaging ,BRAIN tumors ,IMAGE segmentation ,PYRAMIDS ,MEDICAL needs assessment - Abstract
Accurate segmentation of brain tumors from Magnetic Resonance Imaging (MRI) is an essential task for medical assessments such as treatment planning and evaluation. The 3-dimensional nature of MRI images poses challenges in memory and computational power. This work proposes a novel SPFC network (Spatial Pyramid Feature Convolution network) to overcome these limitations and effectively segment the complete, core, and enhanced tumor regions from MRI brain images. All image slices from the BRATS-2018 training dataset are first pre-processed using a contour curve to remove insignificant background pixels. These pre-processed slices are then processed by the SPFC network, which extracts spatial features from all input slices through cascaded pyramidal convolutions. The SPFC network contains two kinds of layers: downsampling layers composed of a hierarchy of three SPF (Spatial Pyramid Feature) blocks and two max-pooling layers, and upsampling layers containing a hierarchy of two unpooling and two SPF blocks. The output of the upsampling layers is then passed through a sigmoid function to segment the complete, core, and enhanced tumor regions. Further, a tissue-type mapping technique is applied over these segmented tumor regions to find the tumor volume and its probability density distribution. The proposed method achieves F1-scores of 0.95, 0.97, and 0.99 for the complete, core, and enhanced regions, which are 7%, 21%, and 22% higher, respectively, than state-of-the-art segmentation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
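The per-region F1 scores reported in the abstract above are standard overlap metrics between a predicted and a ground-truth binary mask (for binary masks, F1 is identical to the Dice coefficient). A minimal numpy sketch of that computation, using toy 4×4 masks that are not from the paper:

```python
import numpy as np

def f1_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """F1 (Dice) between two binary masks: 2*TP / (2*TP + FP + FN)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()  # equals 2*TP + FP + FN
    return 2.0 * tp / denom if denom else 1.0

# Toy slices standing in for one segmented tumor region.
truth = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
pred = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
print(round(f1_score(pred, truth), 4))  # 2*3/(3+4) = 6/7 ≈ 0.8571
```

In a full evaluation this would be computed once per region class (complete, core, enhanced) and averaged over all test volumes.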
48. Redefining brain tumor segmentation: a cutting-edge convolutional neural networks-transfer learning approach.
- Author
-
Saifullah, Shoffan and Dreżewski, Rafał
- Subjects
BRAIN tumors ,CONVOLUTIONAL neural networks ,COMPUTER-assisted image analysis (Medicine) ,MAGNETIC resonance imaging ,IMAGE analysis ,DEEP learning - Abstract
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans has profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with a ResNet18 backbone. The model is rigorously trained and evaluated, achieving a global accuracy of 99.286%, a class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. A detailed comparative analysis with existing methods demonstrates the superiority of the proposed model. These findings underscore the model's competence in precise brain tumor localization and its potential to advance medical image analysis and healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, with emphasis on addressing false positives and resource efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. HAB‐Net: Hierarchical asymmetric convolution and boundary enhancement network for brain tumor segmentation.
- Author
-
Hu, Yuanjing, Huang, Aibin, and Xu, Rui
- Subjects
- *
BRAIN tumors , *TRANSFORMER models , *CONVOLUTIONAL neural networks , *FEATURE extraction , *PHOTOTHERMAL effect - Abstract
Brain tumour segmentation (BTS) is crucial for diagnosis and treatment planning, delineating tumour boundaries and subregions in multi‐modality bio‐imaging data. Several BTS models have been proposed to address specific technical challenges in this field, but accurately capturing intricate tumour structures and boundaries remains difficult. To overcome this challenge, HAB‐Net, a model that combines the strengths of convolutional neural networks and transformer architectures, is presented. HAB‐Net incorporates a custom‐designed hierarchical pseudo‐convolutional module called hierarchical asymmetric convolutions (HAC). In the encoder, a coordinate attention module is included to extract feature maps, and a Swin Transformer with a self‐attention mechanism is integrated to effectively capture long‐range relationships. The decoder is enhanced with a boundary attention module (BAM) to improve boundary information and overall segmentation performance. Extensive evaluations on the BraTS2018 and BraTS2021 datasets demonstrate significant improvements in segmentation accuracy for tumour regions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
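A boundary attention module of the kind this abstract describes emphasises pixels near tumour edges. One crude way to expose such boundary cues is interior erosion of a binary mask: a pixel is a boundary pixel if it belongs to the mask but has at least one background 4-neighbour. The sketch below only illustrates that idea with numpy; it is not HAB-Net's BAM:

```python
import numpy as np

def boundary_map(mask: np.ndarray) -> np.ndarray:
    """Mask pixels whose 4-neighbourhood contains background.

    Computed as mask AND NOT eroded(mask), where erosion keeps a pixel
    only if it and its four neighbours are all foreground.
    """
    padded = np.pad(mask.astype(bool), 1)
    eroded = (padded[1:-1, 1:-1]
              & padded[:-2, 1:-1] & padded[2:, 1:-1]    # up / down
              & padded[1:-1, :-2] & padded[1:-1, 2:])   # left / right
    return mask.astype(bool) & ~eroded

# A 3x3 square of foreground inside a 5x5 image: the boundary is the
# 8-pixel ring, and only the centre pixel survives erosion.
mask = np.zeros((5, 5), dtype=int)
mask[1:4, 1:4] = 1
print(boundary_map(mask).astype(int))
```

Metrics such as the Boundary F1 score are built on exactly this kind of edge map, comparing predicted and ground-truth boundaries within a small distance tolerance.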
50. SARFNet: Selective Layer and Axial Receptive Field Network for Multimodal Brain Tumor Segmentation.
- Author
-
Guo, Bin, Cao, Ning, Yang, Peng, and Zhang, Ruihao
- Subjects
BRAIN tumors ,CONVOLUTIONAL neural networks ,MAGNETIC resonance imaging ,FEATURE extraction ,MARKOV random fields - Abstract
Efficient magnetic resonance imaging (MRI) segmentation, which is helpful for treatment planning, is essential for identifying brain tumors from detailed images. In recent years, various convolutional neural network (CNN) structures have been introduced for brain tumor segmentation tasks and have performed well. However, the downsampling blocks of most existing methods are typically used only to handle the variation in image sizes and lack sufficient capacity for further feature extraction. We therefore propose SARFNet, a method based on the UNet architecture, which consists of the proposed SLiRF module and an advanced AAM module. The SLiRF downsampling module can extract feature information and prevent the loss of important information while reducing the image size. The AAM block, incorporated into the bottleneck layer, captures more contextual information. A Channel Attention Module (CAM) is introduced into the skip connections to strengthen the connections between channel features, improving accuracy and producing better feature expression. Finally, deep supervision is utilized in the decoder layers to avoid vanishing gradients and generate better feature representations. Extensive experiments on the BraTS2018 dataset validate the effectiveness of the model. SARFNet achieved Dice coefficient scores of 90.40, 85.54, and 82.15 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively. The results show that the proposed model achieves state-of-the-art performance compared with twelve or more benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
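The WT/TC/ET Dice scores reported above follow the BraTS convention of nested evaluation regions built from the label values 1 (necrotic/non-enhancing core), 2 (edema) and 4 (enhancing tumour): whole tumor is {1, 2, 4}, tumor core is {1, 4}, enhancing tumor is {4}. A small numpy sketch of that evaluation, with toy flattened label arrays that are not from the paper:

```python
import numpy as np

# BraTS label convention and the three nested evaluation regions.
REGIONS = {"WT": (1, 2, 4), "TC": (1, 4), "ET": (4,)}

def dice(pred: np.ndarray, truth: np.ndarray, labels) -> float:
    """Dice between binarised label maps: union each region's labels first."""
    p = np.isin(pred, labels)
    t = np.isin(truth, labels)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Toy flattened voxel labels (0 = background).
truth = np.array([2, 2, 1, 4, 0, 0])
pred = np.array([2, 1, 1, 4, 4, 0])
scores = {name: dice(pred, truth, labs) for name, labs in REGIONS.items()}
```

Because the regions are nested, WT is usually the easiest target and ET the hardest, which matches the descending 90.40 / 85.54 / 82.15 ordering reported for SARFNet.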