1,576 results for "Brain tumor segmentation"
Search Results
202. Brain Tumor Segmentation in mpMRI Scans (BraTS-2021) Using Models Based on U-Net Architecture
- Author
-
Maurya, Satyajit, Kumar Yadav, Virendra, Agarwal, Sumeet, Singh, Anup, Crimi, Alessandro, editor, and Bakas, Spyridon, editor
- Published
- 2022
- Full Text
- View/download PDF
203. Optimized U-Net for Brain Tumor Segmentation
- Author
-
Futrega, Michał, Milesi, Alexandre, Marcinkiewicz, Michał, Ribalta, Pablo, Crimi, Alessandro, editor, and Bakas, Spyridon, editor
- Published
- 2022
- Full Text
- View/download PDF
204. MS UNet: Multi-scale 3D UNet for Brain Tumor Segmentation
- Author
-
Ahmad, Parvez, Qamar, Saqib, Shen, Linlin, Rizvi, Syed Qasim Afser, Ali, Aamir, Chetty, Girija, Crimi, Alessandro, editor, and Bakas, Spyridon, editor
- Published
- 2022
- Full Text
- View/download PDF
205. Brain Tumor Segmentation from Multiparametric MRI Using a Multi-encoder U-Net Architecture
- Author
-
Alam, Saruar, Halandur, Bharath, Mana, P. G. L. Porta, Goplen, Dorota, Lundervold, Arvid, Lundervold, Alexander Selvikvåg, Crimi, Alessandro, editor, and Bakas, Spyridon, editor
- Published
- 2022
- Full Text
- View/download PDF
206. Quality-Aware Model Ensemble for Brain Tumor Segmentation
- Author
-
Wang, Kang, Wang, Haoran, Li, Zeyang, Pan, Mingyuan, Wang, Manning, Wang, Shuo, Song, Zhijian, Crimi, Alessandro, editor, and Bakas, Spyridon, editor
- Published
- 2022
- Full Text
- View/download PDF
207. Deep Learning Based Ensemble Approach for 3D MRI Brain Tumor Segmentation
- Author
-
Do, Tien-Bach-Thanh, Trinh, Dang-Linh, Tran, Minh-Trieu, Lee, Guee-Sang, Kim, Soo-Hyung, Yang, Hyung-Jeong, Crimi, Alessandro, editor, and Bakas, Spyridon, editor
- Published
- 2022
- Full Text
- View/download PDF
208. Disparity Autoencoders for Multi-class Brain Tumor Segmentation
- Author
-
Bangalore Yogananda, Chandan Ganesh, Das, Yudhajit, Wagner, Benjamin C., Nalawade, Sahil S., Reddy, Divya, Holcomb, James, Pinho, Marco C., Fei, Baowei, Madhuranthakam, Ananth J., Maldjian, Joseph A., Crimi, Alessandro, editor, and Bakas, Spyridon, editor
- Published
- 2022
- Full Text
- View/download PDF
209. Extending nn-UNet for Brain Tumor Segmentation
- Author
-
Luu, Huan Minh, Park, Sung-Hong, Crimi, Alessandro, editor, and Bakas, Spyridon, editor
- Published
- 2022
- Full Text
- View/download PDF
210. Brain Tumor Segmentation in Multi-parametric Magnetic Resonance Imaging Using Model Ensembling and Super-resolution
- Author
-
Jiang, Zhifan, Zhao, Can, Liu, Xinyang, Linguraru, Marius George, Crimi, Alessandro, editor, and Bakas, Spyridon, editor
- Published
- 2022
- Full Text
- View/download PDF
211. Segmentation of Brain Tumors with Multi-kernel Fuzzy C-means Clustering in MRI
- Author
-
Robert Singh, A., Athisayamani, Suganya, Bhateja, Vikrant, editor, Khin Wee, Lai, editor, Lin, Jerry Chun-Wei, editor, Satapathy, Suresh Chandra, editor, and Rajesh, T. M., editor
- Published
- 2022
- Full Text
- View/download PDF
212. Comparison Study on Some Convolutional Neural Networks for Cerebral MRI Images Segmentation
- Author
-
Moujahid, Hicham, Cherradi, Bouchaib, Bahatti, Lhoussain, and Krit, Salah-ddine, editor
- Published
- 2022
- Full Text
- View/download PDF
213. Brain Tumor Segmentation Based on 2D U-Net Using MRI Multi-modalities Brain Images
- Author
-
Tene-Hurtado, Daniela, Almeida-Galárraga, Diego A., Villalba-Meneses, Gandhi, Alvarado-Cando, Omar, Cadena-Morejón, Carolina, Salazar, Valeria Herrera, Orozco-López, Onofre, Tirado-Espín, Andrés, Narváez, Fabián R., editor, Proaño, Julio, editor, Morillo, Paulina, editor, Vallejo, Diego, editor, González Montoya, Daniel, editor, and Díaz, Gloria M., editor
- Published
- 2022
- Full Text
- View/download PDF
214. Automated Brain Tumor Segmentation and Classification Through MRI Images
- Author
-
Gull, Sahar, Akbar, Shahzad, Hassan, Syed Ale, Rehman, Amjad, Sadad, Tariq, Liatsis, Panos, editor, Hussain, Abir, editor, Mostafa, Salama A., editor, and Al-Jumeily, Dhiya, editor
- Published
- 2022
- Full Text
- View/download PDF
215. A Study on Brain Tumor Segmentation in Noisy Magnetic Resonance Images
- Author
-
Shivhare, Shiv Naresh, Kumar, Nitin, Gupta, Gaurav, editor, Wang, Lipo, editor, Yadav, Anupam, editor, Rana, Puneet, editor, and Wang, Zhenyu, editor
- Published
- 2022
- Full Text
- View/download PDF
216. A Review: Recent Automatic Algorithms for the Segmentation of Brain Tumor MRI
- Author
-
Rafi, Asra, Khan, Zia, Aslam, Faiza, Jawed, Soyeba, Shafique, Ayesha, Ali, Haider, Boulouard, Zakaria, editor, Ouaissa, Mariya, editor, Ouaissa, Mariyam, editor, and El Himer, Sarah, editor
- Published
- 2022
- Full Text
- View/download PDF
217. Comparative Analysis of Brain Tumor Segmentation with Fuzzy C-Means Using Multicore CPU and CUDA on GPU
- Author
-
Sahana, Sowmya, S., Narendra, V., Tavares, João Manuel R. S., editor, Dutta, Paramartha, editor, Dutta, Soumi, editor, and Samanta, Debabrata, editor
- Published
- 2022
- Full Text
- View/download PDF
218. A Dual Supervision Guided Attentional Network for Multimodal MR Brain Tumor Segmentation
- Author
-
Zhou, Tongxue, Canu, Stéphane, Vera, Pierre, Ruan, Su, Su, Ruidan, editor, Zhang, Yu-Dong, editor, and Liu, Han, editor
- Published
- 2022
- Full Text
- View/download PDF
219. A lightweight hierarchical convolution network for brain tumor segmentation
- Author
-
Yuhu Wang, Yuzhen Cao, Jinqiu Li, Hongtao Wu, Shuo Wang, Xinming Dong, and Hui Yu
- Subjects
Brain tumor segmentation, Lightweight network, Deep learning, Convolutional neural network, Computer applications to medicine. Medical informatics, R858-859.7, Biology (General), QH301-705.5 - Abstract
Background: Brain tumor segmentation plays a significant role in clinical treatment and surgical planning. Recently, several deep convolutional networks have been proposed for brain tumor segmentation and have achieved impressive performance. However, most state-of-the-art models use 3D convolution networks, which require high computational costs. This makes it difficult to apply these models to medical equipment in the future. Additionally, due to the large diversity of brain tumors and uncertain boundaries between sub-regions, some models cannot segment multiple tumors in the brain well at the same time. Results: In this paper, we proposed a lightweight hierarchical convolution network, called LHC-Net. Our network uses a multi-scale strategy in which the common 3D convolution is replaced by hierarchical convolution with residual-like connections. It improves the ability of multi-scale feature extraction and greatly reduces parameters and computation resources. On the BraTS2020 dataset, LHC-Net achieves Dice scores of 76.38%, 90.01% and 83.32% for ET, WT and TC, respectively, better than the 73.50%, 89.42% and 81.92% of 3D U-Net. Especially on the multi-tumor set, our model shows significant performance improvement. In addition, LHC-Net has 1.65M parameters and 35.58G FLOPs, roughly half the parameters and a third of the computation of 3D U-Net. Conclusion: Our proposed method achieves automatic segmentation of tumor sub-regions from four-modal brain MRI images. LHC-Net achieves competitive segmentation performance with fewer parameters and less computation than state-of-the-art models, which means it can be applied under limited medical computing resources. By using the multi-scale strategy on channels, LHC-Net can segment multiple tumors in the patient's brain well. It has great potential for application to other multi-scale segmentation tasks.
- Published
- 2022
- Full Text
- View/download PDF
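The LHC-Net abstract in entry 219 above hinges on replacing plain 3D convolutions with hierarchical convolutions that carry residual-like connections between channel groups. The PyTorch sketch below illustrates one plausible reading of such a block using a Res2Net-style channel split; the class name HierarchicalConv3d, the group count, and the block-level residual are illustrative assumptions, not the actual LHC-Net layer definition.

```python
import torch
import torch.nn as nn

class HierarchicalConv3d(nn.Module):
    """Hierarchical 3D convolution with residual-like connections (sketch)."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0, "channels must be divisible by groups"
        self.groups = groups
        width = channels // groups
        # one small conv per channel group except the first, which is passed through
        self.convs = nn.ModuleList(
            nn.Conv3d(width, width, kernel_size=3, padding=1, bias=False)
            for _ in range(groups - 1)
        )
        self.bn = nn.BatchNorm3d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        splits = torch.chunk(x, self.groups, dim=1)
        outs, prev = [splits[0]], splits[0]        # first group: identity
        for conv, part in zip(self.convs, splits[1:]):
            prev = conv(part + prev)               # residual-like connection between groups
            outs.append(prev)
        y = torch.cat(outs, dim=1)
        return self.act(self.bn(y) + x)            # block-level residual

# toy check: the block preserves the input shape
print(HierarchicalConv3d(32)(torch.randn(1, 32, 16, 16, 16)).shape)
```

Because each group only sees a fraction of the channels, the parameter count is much lower than a dense 3D convolution over all channels, which matches the lightweight motivation of the abstract.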
220. Focal cross transformer: multi-view brain tumor segmentation model based on cross window and focal self-attention.
- Author
-
Li Zongren, Wushouer Silamu, Feng Shurui, and Yan Guanghui
- Subjects
TRANSFORMER models, BRAIN tumors, CONVOLUTIONAL neural networks, COMPUTER vision - Abstract
Introduction: Recently, the Transformer model and its variants have been a great success in computer vision, and have surpassed the performance of convolutional neural networks (CNNs). The key to the success of vision Transformers is the acquisition of short-term and long-term visual dependencies through self-attention mechanisms; this technology can efficiently learn global and remote semantic information interactions. However, there are certain challenges associated with the use of Transformers: the computational cost of the global self-attention mechanism increases quadratically, hindering the application of Transformers to high-resolution images. Methods: In view of this, this paper proposes a multi-view brain tumor segmentation model based on cross windows and focal self-attention, a novel mechanism that enlarges the receptive field through parallel cross windows and improves global dependence by using local fine-grained and global coarse-grained interactions. First, the receptive field is increased by parallelizing the self-attention of horizontal and vertical stripes in the cross window, achieving strong modeling capability while limiting the computational cost. Second, focal self-attention over local fine-grained and global coarse-grained interactions enables the model to capture short-term and long-term visual dependencies efficiently. Results: Finally, the performance of the model on the BraTS2021 validation set is as follows: Dice similarity scores of 87.28, 87.35 and 93.28%; Hausdorff distances (95%) of 4.58 mm, 5.26 mm and 3.78 mm for the enhancing tumor, tumor core and whole tumor, respectively. Discussion: In summary, the model proposed in this paper achieves excellent performance while limiting the computational cost. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
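Entry 220 above builds its receptive field from parallel horizontal and vertical stripe (cross-window) self-attention. The sketch below shows only the stripe-splitting idea, with single-pixel-wide stripes, one head, and no focal (coarse-grained) term; those simplifications, and the class name CrossStripeAttention, are assumptions made for illustration rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class CrossStripeAttention(nn.Module):
    """Half the channels attend within rows, the other half within columns."""
    def __init__(self, dim: int):
        super().__init__()
        assert dim % 2 == 0, "channel dimension must split evenly into two branches"
        self.h_attn = nn.MultiheadAttention(dim // 2, num_heads=1, batch_first=True)
        self.v_attn = nn.MultiheadAttention(dim // 2, num_heads=1, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) feature map in channels-last layout
        B, H, W, C = x.shape
        xh, xv = x.split(C // 2, dim=-1)
        rows = xh.reshape(B * H, W, C // 2)                       # horizontal stripes
        rows, _ = self.h_attn(rows, rows, rows)
        cols = xv.permute(0, 2, 1, 3).reshape(B * W, H, C // 2)   # vertical stripes
        cols, _ = self.v_attn(cols, cols, cols)
        rows = rows.reshape(B, H, W, C // 2)
        cols = cols.reshape(B, W, H, C // 2).permute(0, 2, 1, 3)
        return self.proj(torch.cat([rows, cols], dim=-1))

# toy usage
x = torch.randn(1, 8, 8, 32)
print(CrossStripeAttention(32)(x).shape)   # torch.Size([1, 8, 8, 32])
```

The appeal of the stripe decomposition is that each attention call scales with the stripe length rather than with the full H*W token count, which is the quadratic-cost problem the abstract describes.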
221. An intelligent brain tumor segmentation using improved Deep Learning Model Based on Cascade Regression method.
- Author
-
V.K, Deepak and R, Sarath
- Subjects
DEEP learning, BRAIN tumors, CONVOLUTIONAL neural networks, MACHINE learning, MAGNETIC resonance imaging, IMAGE segmentation - Abstract
The brain tumor is formed by abnormal cells that develop and reproduce unpredictably. A timely diagnosis of a brain tumor amplifies the likelihood of survival for the patient. Brain tumor segmentation is significant in medical image processing, yet specialists generally rely on a manual methodology of segmentation when diagnosing brain tumours; it is not exact, is subject to inter- and intra-observer variability, may include non-enhancing tissue, and is time demanding. A new and Improved Deep Learning Model formulated on the Cascade Regression method (DLCR) is proposed for image segmentation to resolve these issues. The proposed method uses a normalization procedure for pre-processing of Magnetic Resonance Imaging (MRI) images using a Fully Convolutional Neural Network (FCNN), and then feature extraction using a Gaussian Mixture Model (GMM) to reduce the data and obtain the relevant characteristics from every feature vector. Current methodologies, namely the Machine Learning Predictive Model (MLPM), Deep Learning Framework (DLF) and Extreme Learning Machine with Local Receptive Fields (ELM-LRF), were compared to our suggested method. The results show the proposed DLCR method achieves better sensitivity, specificity, recall ratio, precision ratio and peak signal-to-noise ratio (PSNR), and a lower Root Mean Square Error (RMSE), than the existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
222. Semi-supervised multiple evidence fusion for brain tumor segmentation.
- Author
-
Huang, Ling, Ruan, Su, and Denœux, Thierry
- Subjects
SUPERVISED learning, DEEP learning, BRAIN tumors, MACHINE learning, DEMPSTER-Shafer theory - Abstract
The performance of deep learning-based methods depends mainly on the availability of large-scale labeled learning data. However, obtaining precisely annotated examples is challenging in the medical domain. Although some semi-supervised deep learning methods have been proposed to train models with fewer labels, only a few studies have focused on the uncertainty caused by the low quality of the images and the lack of annotations. This paper addresses the above issues using Dempster-Shafer theory and deep learning: 1) a semi-supervised learning algorithm is proposed based on an image transformation strategy; 2) a probabilistic deep neural network and an evidential neural network are used in parallel to provide two sources of segmentation evidence; 3) Dempster's rule is used to combine the two pieces of evidence and reach a final segmentation result. Results from a series of experiments on the BraTS2019 brain tumor dataset show that our framework achieves promising results when only some training data are labeled. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
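Entry 222 above fuses the outputs of a probabilistic network and an evidential network with Dempster's rule of combination. Below is a small, self-contained implementation of the rule for two mass functions over a per-voxel frame of discernment; the toy mass values are invented for illustration and do not come from the paper.

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions with Dempster's rule.
    Focal elements are frozensets over the frame of discernment; mass assigned
    to conflicting (empty-intersection) pairs is removed by normalization."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: the two sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# toy per-voxel example on the frame {tumor, background}
frame = frozenset({"tumor", "background"})
m_prob = {frozenset({"tumor"}): 0.7, frozenset({"background"}): 0.2, frame: 0.1}
m_evid = {frozenset({"tumor"}): 0.5, frozenset({"background"}): 0.1, frame: 0.4}
print(dempster_combine(m_prob, m_evid))
```

Mass left on the whole frame expresses ignorance, which is how the evidential branch can represent the annotation and image-quality uncertainty the abstract mentions.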
223. Improving brain tumor segmentation performance using CycleGAN based feature extraction.
- Author
-
Azni, Hamed Mohammadi, Afsharchi, Mohsen, and Allahverdi, Armin
- Subjects
BRAIN tumors, FEATURE extraction, GENERATIVE adversarial networks, IMAGE segmentation, DEEP learning, MAGNETIC resonance imaging - Abstract
The analysis of brain tumors plays a significant role in medical applications and provides a huge amount of anatomic and functional information. Automatic tumor segmentation is one of the most challenging issues among radiologists and other specialists intent on lowering and eliminating manual detection errors and speeding up the detection of tissue types. In recent years, automatic segmentation combined with deep learning has been proven to be more powerful than traditional approaches. This study introduces a two-step model for the segmentation of brain tumors on multi-channel MRI images based on the Generative Adversarial Network (GAN). First of all, we use CycleGAN and two segmentors added to its structure to train networks that produce new features appropriate for segmentation. Different modalities of MRI images are fed into these two-way networks, and the same modalities, along with the target tumor segment, are requested at the output. In addition to segmentation, features created in the middle layers of these networks are capable of mapping images to one another. In the second step, the transfer learning technique is used to extract the related subnets and inject the features produced into the main segmentation network. In fact, the present study attempts to find features with greater detection power by converting different MRI images to each other. It is assumed that features beneficial in the conversion of different MRI image modalities to each other will also improve segmentation performance. The proposed method is evaluated on BraTS 2018, and the results demonstrate the superiority of this method over the majority of existing approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
224. A Brain Tumor MR Image Segmentation Method Based on a Dual Attention Mechanism and Iterative Aggregation U-Net (基于双重注意力机制和迭代聚合U-Net的脑肿瘤MR图像分割方法).
- Author
-
周煜松, 陈罗林, 王统, and 徐胜舟
- Published
- 2023
- Full Text
- View/download PDF
225. Brain Tumor Segmentation Network with Multi-View Ensemble Discrimination and Kernel-Sharing Dilated Convolution.
- Author
-
Guan, Xin, Zhao, Yushan, Nyatega, Charles Okanda, and Li, Qiang
- Subjects
BRAIN tumors, CONVOLUTIONAL neural networks, MAGNETIC resonance imaging - Abstract
Accurate segmentation of brain tumors from 3D magnetic resonance images (MRI) is critical for clinical decisions and surgical planning. Radiologists usually separate and analyze brain tumors by combining images of axial, coronal, and sagittal views. However, traditional convolutional neural network (CNN) models tend to use information from only a single view, or from each view one at a time. Moreover, existing models adopt a multi-branch structure with different-size convolution kernels in parallel to adapt to various tumor sizes. However, the difference in the convolution kernels' parameters cannot precisely characterize the feature similarity of tumor lesion regions with various sizes, connectivity, and convexity. To address the above problems, we propose a hierarchical multi-view convolution method that decouples the standard 3D convolution into axial, coronal, and sagittal views to provide complementary-view features. Then, every pixel is classified by ensembling the discriminant results from the three views. Moreover, we propose a multi-branch kernel-sharing mechanism with a dilated rate to obtain parameter-consistent convolution kernels with different receptive fields. We use the BraTS2018 and BraTS2020 datasets for comparison experiments. The average Dice coefficients of the proposed network on the BraTS2020 dataset reach 78.16%, 89.52%, and 83.05% for the enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively, while the number of parameters is only 0.5 M. Compared with the baseline network for brain tumor segmentation, the accuracy was improved by 1.74%, 0.5%, and 2.19%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
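The kernel-sharing mechanism described in entry 225 above applies the same convolution kernel at several dilation rates, so the parallel branches share parameters while covering different receptive fields. The sketch below shows that idea in PyTorch; summing the branch outputs and the specific dilation rates are assumptions, not the paper's exact fusion scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelSharingDilatedConv3d(nn.Module):
    """One 3x3x3 kernel reused across several dilation rates (sketch)."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 3)):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, 3, 3, 3))
        nn.init.kaiming_normal_(self.weight)
        self.dilations = dilations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs = [
            F.conv3d(x, self.weight, padding=d, dilation=d)
            for d in self.dilations          # padding == dilation keeps the spatial size
        ]
        return torch.stack(outs, dim=0).sum(dim=0)

# toy usage
x = torch.randn(1, 4, 8, 8, 8)
print(KernelSharingDilatedConv3d(4, 8)(x).shape)   # torch.Size([1, 8, 8, 8, 8])
```

Sharing the kernel keeps the parameter count of the multi-branch structure at that of a single branch, which is consistent with the 0.5 M parameter figure the abstract reports.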
226. MSFR-Net: Multi-modality and single-modality feature recalibration network for brain tumor segmentation.
- Author
-
Xiang Li, Yuchen Jiang, Minglei Li, Jiusi Zhang, Shen Yin, and Hao Luo
- Subjects
BRAIN tumors, DRUG labeling, MANUAL labor, MAGNETIC resonance imaging - Abstract
Background: Accurate and automated brain tumor segmentation from multimodality MR images plays a significant role in tumor treatment. However, the existing approaches mainly focus on the fusion of multi-modality while ignoring the correlation between single-modality and tumor subcomponents. For example, T2-weighted images show good visualization of edema, and T1-contrast images have a good contrast between enhancing tumor core and necrosis. In the actual clinical process, professional physicians also label tumors according to these characteristics. We design a method for brain tumor segmentation that utilizes both multi-modality fusion and single-modality characteristics. Methods: A multi-modality and single-modality feature recalibration network (MSFR-Net) is proposed for brain tumor segmentation from MR images. Specifically, multi-modality information and single-modality information are assigned to independent pathways. The multi-modality network explicitly learns the relationship between all modalities and all tumor sub-components. The single-modality network learns the relationship between single-modality and its highly correlated tumor subcomponents. Then, a dual recalibration module (DRM) is designed to connect the parallel single-modality network and multi-modality network at multiple stages. The function of the DRM is to unify the two types of features into the same feature space. Results: Experiments on the BraTS 2015 dataset and BraTS 2018 dataset show that the proposed method is competitive and superior to other state-of-the-art methods. The proposed method achieved segmentation results with a Dice coefficient of 0.86 and Hausdorff distance of 4.82 on the BraTS 2018 dataset, and with a Dice coefficient of 0.80, positive predictive value of 0.76, and sensitivity of 0.78 on the BraTS 2015 dataset. Conclusions: This work combines the manual labeling process of doctors and introduces the correlation between single-modality and the tumor subcomponents into the segmentation network. The method improves the segmentation performance of brain tumors and can be applied in clinical practice. The code of the proposed method is available at: https://github.com/xiangQAQ/MSFR-Net. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
227. Deep and statistical learning in biomedical imaging: State of the art in 3D MRI brain tumor segmentation.
- Author
-
Fernando, K. Ruwani M. and Tsokos, Chris P.
- Subjects
STATISTICAL learning, BRAIN tumors, DEEP learning, COMPUTER vision, ARTIFICIAL intelligence, COMPUTER-assisted image analysis (Medicine), MAGNETIC resonance imaging - Abstract
Clinical diagnosis and treatment decisions rely upon the integration of patient-specific data with clinical reasoning. Cancer presents a unique context that influences treatment decisions, given its diverse forms of disease evolution. Biomedical imaging allows non-invasive assessment of diseases based on visual evaluations, leading to better clinical outcome prediction and therapeutic planning. Early methods of brain cancer characterization predominantly relied upon the statistical modeling of neuroimaging data. Driven by breakthroughs in computer vision, deep learning has become the de facto standard in medical imaging. Integrated statistical and deep learning methods have recently emerged as a new direction in the automation of medical practice, unifying multi-disciplinary knowledge in medicine, statistics, and artificial intelligence. In this study, we critically review major statistical, deep learning, and probabilistic deep learning models and their applications in brain imaging research with a focus on MRI-based brain tumor segmentation. These results highlight that model-driven classical statistics and data-driven deep learning are a potent combination for developing automated systems in clinical oncology. • Deep learning and statistical models used in brain tumor segmentation are reviewed. • Probabilistic deep learning and applications of hybrid methods are summarized. • The review is conducted from both a theory-driven and application perspective. • Challenges and future directions in neuroimaging analysis are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
228. Encoder–Decoder Network with Depthwise Atrous Spatial Pyramid Pooling for Automatic Brain Tumor Segmentation.
- Author
-
AboElenein, Nagwa M., Piao, Songhao, and Zhang, Zhehong
- Subjects
BRAIN tumors, PYRAMIDS, CONVOLUTIONAL neural networks, FEATURE extraction - Abstract
The accurate automated segmentation of brain tumors is important for disease analysis and control and increases the likelihood of survival. However, it faces significant challenges due to the low contrast of tissue boundaries and the small size of tumors. Convolutional Neural Networks are a common automated image evaluation technique that has greatly improved the current state-of-the-art precision in the task of segmenting brain tumors. This paper presents an advanced Encoder–Decoder algorithm with a Depthwise Atrous Spatial Pyramid Pooling Network (EDD-Net). Firstly, a Dilated–ResNet block with Squeeze-and-Excitation is introduced in the encoder and decoder modules to derive image features adaptively and focus on the relevant characteristics of the brain segmentation task with fewer parameters. Then, the Depthwise Atrous Spatial Pyramid Pooling (DSPP) technique is used as the transition and output layers of the network to achieve multi-scale extraction of the feature image and preserve more spatial information. Furthermore, to speed up learning we propose a down-sampling module, while the up-sampling module can more efficiently aggregate low- and high-level feature information. The proposed method is evaluated on the BraTS 2019 dataset. Experiments demonstrate that EDD-Net provides high accuracy and robustness in small tumor segmentation. On the online validation set, the suggested ensemble achieved Dice scores of 0.813, 0.873, and 0.866 for the enhancing tumor, whole tumor, and tumor core, respectively, performing favorably compared with existing state-of-the-art architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
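Entry 228 above uses Depthwise Atrous Spatial Pyramid Pooling (DSPP) as the transition and output layers of EDD-Net. A minimal sketch of such a block is given below, assuming parallel depthwise dilated convolutions fused by a pointwise convolution; the dilation rates and the 2D formulation are illustrative choices, not EDD-Net's published configuration.

```python
import torch
import torch.nn as nn

class DepthwiseASPP(nn.Module):
    """Atrous spatial pyramid pooling built from depthwise separable convs (sketch)."""
    def __init__(self, channels: int, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d,
                          groups=channels, bias=False),   # depthwise, one filter per channel
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # pointwise conv fuses the multi-scale branches back to `channels`
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

# toy usage: spatial size is preserved across all dilation rates
print(DepthwiseASPP(16)(torch.randn(1, 16, 64, 64)).shape)   # torch.Size([1, 16, 64, 64])
```

Making each pyramid branch depthwise is what keeps the multi-scale context extraction cheap relative to a standard ASPP with full convolutions.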
229. Brain Tumor Classification and Segmentation Using Dual-Outputs for U-Net Architecture: O2U-Net.
- Author
-
ZARGARI, Seyed Aman, KIA, Zahra Sadat, NICKFARJAM, Ali Mohammad, HIEBER, Daniel, and HOLL, Felix
- Abstract
We propose a modified version of the U-Net architecture for segmenting and classifying brain tumors, introducing another output between down- and upsampling. Our proposed architecture utilizes two outputs, adding a classification output beside the segmentation output. The central idea is to use fully connected layers to classify each image before applying U-Net’s up-sampling operations. This is achieved by utilizing the features extracted during the down-sampling procedure and combining them with fully connected layers for classification. Afterward, the segmented image is generated by U-Net’s up-sampling process. Initial tests show competitive results against comparable models with 80.83%, 99.34%, and 77.39% for the dice coefficient, accuracy, and sensitivity, respectively. The tests were conducted on the well-established dataset from Nanfang Hospital, Guangzhou, China, and General Hospital, Tianjin Medical University, China, from 2005 to 2010 containing MRI images of 3064 brain tumors. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
230. Improving brain tumor segmentation with anatomical prior-informed pre-training
- Author
-
Kang Wang, Zeyang Li, Haoran Wang, Siyu Liu, Mingyuan Pan, Manning Wang, Shuo Wang, and Zhijian Song
- Subjects
masked autoencoder, anatomical priors, transformer, brain tumor segmentation, magnetic resonance image, self-supervised learning, Medicine (General), R5-920 - Abstract
Introduction: Precise delineation of glioblastoma in multi-parameter magnetic resonance images is pivotal for neurosurgery and subsequent treatment monitoring. Transformer models have shown promise in brain tumor segmentation, but their efficacy heavily depends on a substantial amount of annotated data. To address the scarcity of annotated data and improve model robustness, self-supervised learning methods using masked autoencoders have been devised. Nevertheless, these methods have not incorporated the anatomical priors of brain structures. Methods: This study proposed an anatomical prior-informed masking strategy to enhance the pre-training of masked autoencoders, which combines data-driven reconstruction with anatomical knowledge. We investigate the likelihood of tumor presence in various brain structures, and this information is then utilized to guide the masking procedure. Results: Compared with random masking, our method enables the pre-training to concentrate on regions that are more pertinent to downstream segmentation. Experiments conducted on the BraTS21 dataset demonstrate that our proposed method surpasses the performance of state-of-the-art self-supervised learning techniques. It enhances brain tumor segmentation in terms of both accuracy and data efficiency. Discussion: Tailored mechanisms designed to extract valuable information from extensive data could enhance computational efficiency and performance, resulting in increased precision. It's still promising to integrate anatomical priors and vision approaches.
- Published
- 2023
- Full Text
- View/download PDF
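Entry 230 above guides masked-autoencoder pre-training with the likelihood of tumor presence in different brain structures. The sketch below shows one way such a prior could bias which patches get masked; the function name, the per-patch prior array, and the sampling scheme are assumptions made for illustration, not the paper's procedure.

```python
import numpy as np

def prior_informed_mask(prior: np.ndarray, mask_ratio: float = 0.75, rng=None) -> np.ndarray:
    """Sample a boolean patch mask whose probabilities follow an anatomical prior.
    `prior` holds one tumor-likelihood value per patch; how it is derived from a
    brain atlas is outside the scope of this sketch."""
    rng = np.random.default_rng(rng)
    n_patches = prior.size
    n_mask = int(round(mask_ratio * n_patches))
    p = prior.ravel() + 1e-8            # avoid exactly-zero probabilities
    p = p / p.sum()
    idx = rng.choice(n_patches, size=n_mask, replace=False, p=p)
    mask = np.zeros(n_patches, dtype=bool)
    mask[idx] = True
    return mask.reshape(prior.shape)

# toy example: a 4x4 grid of patches where the centre is more tumor-prone
prior = np.ones((4, 4))
prior[1:3, 1:3] = 5.0
print(prior_informed_mask(prior, mask_ratio=0.5, rng=0).astype(int))
```

Patches in tumor-prone structures are masked (and therefore reconstructed) more often, so the pre-training signal concentrates on the regions that matter for the downstream segmentation task.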
231. Deep fusion of multi-modal features for brain tumor image segmentation
- Author
-
Guying Zhang, Jia Zhou, Guanghua He, and Hancan Zhu
- Subjects
Brain tumor segmentation, Deep convolutional network, Multi-modality fusion, Deep residual learning, Science (General), Q1-390, Social sciences (General), H1-99 - Abstract
Accurate segmentation of pathological regions in brain magnetic resonance images (MRI) is essential for the diagnosis and treatment of brain tumors. Multi-modality MRIs, which offer diverse feature information, are commonly utilized in brain tumor image segmentation. Deep neural networks have become prevalent in this field; however, many approaches simply concatenate different modalities and input them directly into the neural network for segmentation, disregarding the unique characteristics and complementarity of each modality. In this study, we propose a brain tumor image segmentation method that leverages deep residual learning with multi-modality image feature fusion. Our approach involves extracting and fusing distinct and complementary features from various modalities, fully exploiting the multi-modality information within a deep convolutional neural network to enhance the performance of brain tumor image segmentation. We evaluate the effectiveness of our proposed method using the BraTS2021 dataset and demonstrate that deep residual learning with multi-modality image feature fusion significantly improves segmentation accuracy. Our method achieves competitive segmentation results, with Dice values of 83.3, 89.07, and 91.44 for enhanced tumor, tumor core, and whole tumor, respectively. These findings highlight the potential of our method in improving brain tumor diagnosis and treatment through accurate segmentation of pathological regions in brain MRIs.
- Published
- 2023
- Full Text
- View/download PDF
232. An N-Shaped Lightweight Network with a Feature Pyramid and Hybrid Attention for Brain Tumor Segmentation
- Author
-
Mengxian Chi, Hong An, Xu Jin, and Zhenguo Nie
- Subjects
brain tumor segmentation, CNNs, feature pyramid, lightweight model, hybrid attention, Science, Astrophysics, QB460-466, Physics, QC1-999 - Abstract
Brain tumor segmentation using neural networks presents challenges in accurately capturing diverse tumor shapes and sizes while maintaining real-time performance. Additionally, addressing class imbalance is crucial for achieving accurate clinical results. To tackle these issues, this study proposes a novel N-shaped lightweight network that combines multiple feature pyramid paths and U-Net architectures. Furthermore, we ingeniously integrate hybrid attention mechanisms into various locations of depth-wise separable convolution module to improve efficiency, with channel attention found to be the most effective for skip connections in the proposed network. Moreover, we introduce a combination loss function that incorporates a newly designed weighted cross-entropy loss and dice loss to effectively tackle the issue of class imbalance. Extensive experiments are conducted on four publicly available datasets, i.e., UCSF-PDGM, BraTS 2021, BraTS 2019, and MSD Task 01 to evaluate the performance of different methods. The results demonstrate that the proposed network achieves superior segmentation accuracy compared to state-of-the-art methods. The proposed network not only improves the overall segmentation performance but also provides a favorable computational efficiency, making it a promising approach for clinical applications.
- Published
- 2024
- Full Text
- View/download PDF
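Entry 232 above counters class imbalance with a combination loss built from a weighted cross-entropy term and a Dice term. The sketch below is a generic PyTorch version of that kind of loss; the 0.5/0.5 mixing factor and the caller-supplied class weights are placeholders, not the paper's newly designed weighting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinationLoss(nn.Module):
    """Weighted cross-entropy plus soft multi-class Dice loss (sketch)."""
    def __init__(self, class_weights, smooth: float = 1e-5):
        super().__init__()
        # class_weights must have one entry per class
        self.register_buffer("w", torch.as_tensor(class_weights, dtype=torch.float))
        self.smooth = smooth

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits: (N, C, D, H, W); target: (N, D, H, W) with integer class labels
        ce = F.cross_entropy(logits, target, weight=self.w)
        probs = torch.softmax(logits, dim=1)
        onehot = F.one_hot(target, num_classes=logits.shape[1])
        onehot = onehot.permute(0, 4, 1, 2, 3).float()
        dims = (0, 2, 3, 4)                          # sum over batch and space, per class
        inter = (probs * onehot).sum(dims)
        denom = probs.sum(dims) + onehot.sum(dims)
        dice = (2 * inter + self.smooth) / (denom + self.smooth)
        return 0.5 * ce + 0.5 * (1.0 - dice.mean())

# toy usage: 3-class problem on a tiny volume
logits = torch.randn(2, 3, 8, 8, 8)
target = torch.randint(0, 3, (2, 8, 8, 8))
print(CombinationLoss(class_weights=[0.2, 1.0, 1.0])(logits, target))
```

The cross-entropy term handles per-voxel classification while the Dice term is insensitive to how few voxels a small tumor sub-region occupies, which is why the two are commonly combined against imbalance.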
233. Improved Brain Tumor Segmentation Using UNet-LSTM Architecture
- Author
-
Sowrirajan, Saran Raj, Karumanan Srinivasan, Logeshwaran, Kalluri, Anisha Devi, and Subburam, Ravi Kumar
- Published
- 2024
- Full Text
- View/download PDF
234. Brain Tumor Segmentation and Tractographic Feature Extraction from Structural MR Images for Overall Survival Prediction
- Author
-
Kao, Po-Yu, Ngo, Thuyen, Zhang, Angela, Chen, Jefferson W, and Manjunath, BS
- Subjects
Bioengineering, Neurosciences, Brain Disorders, Cancer, Neurological, Brain tumor segmentation, Brain parcellation, Group normalization, Hard negative mining, Ensemble modeling, Overall survival prediction, Tractographic feature, cs.CV, cs.LG, Artificial Intelligence & Image Processing - Abstract
This paper introduces a novel methodology to integrate human brain connectomics and parcellation for brain tumor segmentation and survival prediction. For segmentation, we utilize an existing brain parcellation atlas in the MNI152 1 mm space and map this parcellation to each individual subject data. We use deep neural network architectures together with hard negative mining to achieve the final voxel level classification. For survival prediction, we present a new method for combining features from connectomics data, brain parcellation information, and the brain tumor mask. We leverage the average connectome information from the Human Connectome Project and map each subject brain volume onto this common connectome space. From this, we compute tractographic features that describe potential neural disruptions due to the brain tumor. These features are then used to predict the overall survival of the subjects. The main novelty in the proposed methods is the use of normalized brain parcellation data and tractography data from the human connectome project for analyzing MR images for segmentation and survival prediction. Experimental results are reported on the BraTS2018 dataset.
- Published
- 2019
235. Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information.
- Author
-
Kao, Po-Yu, Shailja, Shailja, Jiang, Jiaxiang, Zhang, Angela, Khan, Amil, Chen, Jefferson W, and Manjunath, BS
- Subjects
3D U-Net, DeepMedic, XGBoost, brain parcellation atlas, brain tumor segmentation, convolutional neural network, ensemble learning, gliomas, Neurosciences, Psychology, Cognitive Sciences - Abstract
The manual brain tumor annotation process is time consuming and resource consuming, therefore, an automated and accurate brain tumor segmentation tool is greatly in demand. In this paper, we introduce a novel method to integrate location information with the state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across different brain parcellation regions and that a locality-sensitive segmentation is likely to obtain better segmentation accuracy. Toward this, we use an existing brain parcellation atlas in the Montreal Neurological Institute (MNI) space and map this atlas to the individual subject data. This mapped atlas in the subject data space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and integrated with XGBoost fusion in the proposed two-level ensemble method. The first level reduces the uncertainty of the same type of models with different seed initializations, and the second level leverages the advantages of different types of neural network models. The proposed location information fusion method improves the segmentation performance of state-of-the-art networks including 3D U-Net and DeepMedic. Our proposed ensemble also achieves better segmentation performance compared to the state-of-the-art networks in BraTS 2017 and rivals state-of-the-art networks in BraTS 2018. Detailed results are provided on the public multimodal brain tumor segmentation (BraTS) benchmarks.
- Published
- 2019
236. Brain tumor segmentation and tractographic feature extraction from structural MR images for overall survival prediction
- Author
-
Kao, PY, Ngo, T, Zhang, A, Chen, JW, and Manjunath, BS
- Subjects
Brain tumor segmentation ,Brain parcellation ,Group normalization ,Hard negative mining ,Ensemble modeling ,Overall survival prediction ,Tractographic feature ,Cancer ,Neurosciences ,Bioengineering ,Brain Disorders ,Neurological ,cs.CV ,cs.LG ,Artificial Intelligence & Image Processing - Abstract
This paper introduces a novel methodology to integrate human brain connectomics and parcellation for brain tumor segmentation and survival prediction. For segmentation, we utilize an existing brain parcellation atlas in the MNI152 1 mm space and map this parcellation to each individual subject data. We use deep neural network architectures together with hard negative mining to achieve the final voxel level classification. For survival prediction, we present a new method for combining features from connectomics data, brain parcellation information, and the brain tumor mask. We leverage the average connectome information from the Human Connectome Project and map each subject brain volume onto this common connectome space. From this, we compute tractographic features that describe potential neural disruptions due to the brain tumor. These features are then used to predict the overall survival of the subjects. The main novelty in the proposed methods is the use of normalized brain parcellation data and tractography data from the human connectome project for analyzing MR images for segmentation and survival prediction. Experimental results are reported on the BraTS2018 dataset.
- Published
- 2019
237. U-Net architecture variants for brain tumor segmentation of histogram corrected images
- Author
-
Lefkovits Szidónia and Lefkovits László
- Subjects
brain tumor segmentation ,histogram uniformization ,u-net ,vgg16-unet ,resnet50-unet ,brats2020 ,68t07 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
In this paper we propose to create an end-to-end brain tumor segmentation system that applies three variants of the well-known U-Net convolutional neural networks. In our results we obtain and analyse the detection performances of U-Net, VGG16-UNet and ResNet-UNet on the BraTS2020 training dataset. Further, we inspect the behavior of the ensemble model obtained as the weighted response of the three CNN models. We introduce essential preprocessing and post-processing steps so as to improve the detection performances. The original images were corrected and the different intensity ranges were transformed into the 8-bit grayscale domain to uniformize the tissue intensities, while preserving the original histogram shapes. For post-processing we apply region connectedness onto the whole tumor and conversion of background pixels into necrosis inside the whole tumor. As a result, we present the Dice scores of our system obtained for WT (whole tumor), TC (tumor core) and ET (enhanced tumor) on the BraTS2020 training dataset.
- Published
- 2022
- Full Text
- View/download PDF
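Entry 237 above forms its final prediction as the weighted response of three U-Net variants. A minimal sketch of such a weighted ensemble over per-voxel probability maps follows; the 0.5 decision threshold and the example weights are assumptions, since the paper's tuned weights are not given here.

```python
import numpy as np

def weighted_ensemble(prob_maps, weights):
    """Weighted average of per-voxel probability maps from several models
    (e.g. U-Net, VGG16-UNet, ResNet-UNet), thresholded into a binary mask."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()               # normalize the model weights
    stacked = np.stack(prob_maps, axis=0)           # shape (n_models, ...)
    fused = np.tensordot(weights, stacked, axes=1)  # weighted average over models
    return (fused > 0.5).astype(np.uint8)           # binary whole-tumor mask

# toy usage with three fake probability volumes
maps = [np.random.default_rng(i).random((4, 4, 4)) for i in range(3)]
print(weighted_ensemble(maps, weights=[0.5, 0.3, 0.2]).sum())
```

Region connectedness filtering and the conversion of enclosed background voxels to necrosis, which the abstract lists as post-processing, would run on the fused mask after this step.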
238. Deep learning based brain tumor segmentation: a survey
- Author
-
Zhihua Liu, Lei Tong, Long Chen, Zheheng Jiang, Feixiang Zhou, Qianni Zhang, Xiangrong Zhang, Yaochu Jin, and Huiyu Zhou
- Subjects
Brain tumor segmentation ,Deep learning ,Neural networks ,Network design ,Data imbalance ,Multi-modalities ,Electronic computers. Computer science ,QA75.5-76.95 ,Information technology ,T58.5-58.64 - Abstract
Abstract Brain tumor segmentation is one of the most challenging problems in medical image analysis. The goal of brain tumor segmentation is to generate accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning based methods have been applied to brain tumor segmentation and achieved promising results. Considering the remarkable breakthroughs made by state-of-the-art technologies, we provide this survey with a comprehensive study of recently developed deep learning based brain tumor segmentation techniques. More than 150 scientific papers are selected and discussed in this survey, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processes. We also provide insightful discussions for future development directions.
- Published
- 2022
- Full Text
- View/download PDF
239. Vision transformers in multi-modal brain tumor MRI segmentation: A review
- Author
-
Pengyu Wang, Qiushi Yang, Zhibin He, and Yixuan Yuan
- Subjects
Brain tumor segmentation ,Multi-modal MRI ,Vision transformer ,Deep learning ,Medical physics. Medical radiology. Nuclear medicine ,R895-920 - Abstract
Brain tumors have shown extreme mortality and increasing incidence during recent years, which bring enormous challenges for the timely diagnosis and effective treatment of brain tumors. Concretely, accurate brain tumor segmentation on multi-modal Magnetic Resonance Imaging (MRI) is essential and important since most normal tissues are unresectable in brain tumor surgery. In the past decade, with the explosive development of artificial intelligence technologies, a series of deep learning-based methods are presented for brain tumor segmentation and achieved excellent performance. Among them, vision transformers with non-local receptive fields show superior performance compared with the classical Convolutional Neural Networks (CNNs). In this review, we focus on the representative transformer-based works for brain tumor segmentation proposed in the last three years. Firstly, this review divides these transformer-based methods as the pure transformer methods and the hybrid transformer methods according to their transformer architectures. Then, we summarize the corresponding theoretical innovations, implementation schemes and superiorities to help readers better understand state-of-the-art transformer-based brain tumor segmentation methods. After that, we introduce the most commonly-used Brain Tumor Segmentation (BraTS) datasets, and comprehensively analyze and compare the performance of existing methods through multiple quantitative statistics. Finally, we discuss the current research challenges and describe the future research trends.
- Published
- 2023
- Full Text
- View/download PDF
240. Analysis of depth variation of U-NET architecture for brain tumor segmentation.
- Author
-
Jena, Biswajit, Jain, Sarthak, Nayak, Gopal Krishna, and Saxena, Sanjay
- Subjects
BRAIN tumors ,CONVOLUTIONAL neural networks ,ARCHITECTURAL design ,GLIOMAS - Abstract
U-NET is a fully convolutional network (FCN) architecture designed for the segmentation of biomedical images. The depth of U-NET is one of the major constraints of this model when computing performance: a larger depth means higher computational complexity. In certain cases, this large depth, as in the original model, is not justified for biomedical imaging modalities. In this paper, we present an efficient analysis of depth variation of the U-NET architecture, i.e., after removing different layers. For the analysis, the BraTS-2017 and BraTS-2019 datasets, which consist of High-Grade Glioma (HGG) and Low-Grade Glioma (LGG) MR scans, have been used for tumor segmentation. We achieved a dice coefficient of at least 0.8866 and as high as 0.8887 on the discovery cohort, and at least 0.8895 and as high as 0.8911 on the cross-validation replication cohort. The results show only minor changes in the performance parameters while moving from the higher to the lower depth of the model. Hence, in this paper, we show that the large depth of U-NET, which costs more in terms of computational complexity, is not always required: U-NET models with reduced depth, which decreases the computational complexity, can achieve nearly the same results as the original U-NET. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
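The depth-variation study in entry 240 above reports Dice coefficients around 0.886-0.891. For reference, the metric itself can be computed as below for a pair of binary masks; the small epsilon guarding against empty masks is a common convention, not something specified by the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float((2.0 * inter + eps) / (pred.sum() + truth.sum() + eps))

# toy example: two overlapping square masks
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:4, 1:3] = 1
print(round(dice_coefficient(a, b), 4))   # 0.8
```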
241. Efficient U-Net Architecture with Multiple Encoders and Attention Mechanism Decoders for Brain Tumor Segmentation.
- Author
-
Aboussaleh, Ilyasse, Riffi, Jamal, Fazazy, Khalid El, Mahraz, Mohamed Adnane, and Tairi, Hamid
- Subjects
BRAIN tumors, IMAGE segmentation, DEEP learning, BRAIN cancer, CAUSES of death - Abstract
The brain is the center of human control and communication. Hence, it is very important to protect it and provide ideal conditions for it to function. Brain cancer remains one of the leading causes of death in the world, and the detection of malignant brain tumors is a priority in medical image segmentation. The brain tumor segmentation task aims to identify the pixels that belong to the abnormal areas when compared to normal tissue. Deep learning has shown in recent years its power to solve this problem, especially the U-Net-like architectures. In this paper, we proposed an efficient U-Net architecture with three different encoders: VGG-19, ResNet50, and MobileNetV2. This is based on transfer learning followed by a bidirectional features pyramid network applied to each encoder to obtain more spatial pertinent features. Then, we fused the feature maps extracted from the output of each network and merged them into our decoder with an attention mechanism. The method was evaluated on the BraTS 2020 dataset to segment the different types of tumors and the results show a good performance in terms of dice similarity, with coefficients of 0.8741, 0.8069, and 0.7033 for the whole tumor, core tumor, and enhancing tumor, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
242. Brain tumor segmentation based on the fusion of deep semantics and edge information in multimodal MRI.
- Author
-
Zhu, Zhiqin, He, Xianyu, Qi, Guanqiu, Li, Yuanyuan, Cong, Baisen, and Liu, Yu
- Subjects
BRAIN tumors, CONVOLUTIONAL neural networks, MAGNETIC resonance imaging, MATHEMATICAL convolutions, SEMANTICS, SOURCE code - Abstract
Brain tumor segmentation in multimodal MRI has great significance in clinical diagnosis and treatment. The utilization of multimodal information plays a crucial role in brain tumor segmentation. However, most existing methods focus on the extraction and selection of deep semantic features, while ignoring some features with specific meaning and importance to the segmentation problem. In this paper, we propose a brain tumor segmentation method based on the fusion of deep semantics and edge information in multimodal MRI, aiming to achieve a more sufficient utilization of multimodal information for accurate segmentation. The proposed method mainly consists of a semantic segmentation module, an edge detection module and a feature fusion module. In the semantic segmentation module, the Swin Transformer is adopted to extract semantic features and a shifted patch tokenization strategy is introduced for better training. The edge detection module is designed based on convolutional neural networks (CNNs) and an edge spatial attention block (ESAB) is presented for feature enhancement. The feature fusion module aims to fuse the extracted semantic and edge features, and we design a multi-feature inference block (MFIB) based on graph convolution to perform feature reasoning and information dissemination for effective feature fusion. The proposed method is validated on the popular BraTS benchmarks. The experimental results verify that the proposed method outperforms a number of state-of-the-art brain tumor segmentation methods. The source code of the proposed method is available at https://github.com/HXY-99/brats. • Proposes a brain tumor segmentation method by fusing semantic and edge features. • Presents a Swin Transformer-based semantic segmentation module with an SPD strategy. • Presents a CNN-based edge detection module and an edge spatial attention block. • Presents a graph convolution-based multi-feature inference block for feature fusion. • Achieves very promising results on the popular BraTS benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
243. Intraoperative thermal infrared imaging in neurosurgery: machine learning approaches for advanced segmentation of tumors.
- Author
-
Cardone, Daniela, Trevisi, Gianluca, Perpetuini, David, Filippini, Chiara, Merla, Arcangelo, and Mangiola, Annunziato
- Abstract
Surgical resection is one of the most relevant practices in neurosurgery. Finding the correct surgical extent of the tumor is a key question and so far several techniques have been employed to assist the neurosurgeon in preserving the maximum amount of healthy tissue. Some of these methods are invasive for patients, not always allowing high precision in the detection of the tumor area. The aim of this study is to overcome these limitations, developing machine learning based models, relying on features obtained from a contactless and non-invasive technique, the thermal infrared (IR) imaging. The thermal IR videos of thirteen patients with heterogeneous tumors were recorded in the intraoperative context. Time (TD)- and frequency (FD)-domain features were extracted and fed to different machine learning models. Models relying on FD features have proven to be the best solutions for the optimal detection of the tumor area (Average Accuracy = 90.45%; Average Sensitivity = 84.64%; Average Specificity = 93.74%). The obtained results highlight the possibility to accurately detect the tumor lesion boundary with a completely non-invasive, contactless, and portable technology, revealing thermal IR imaging as a very promising tool for the neurosurgeon. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
244. Brain Tumor Segmentation and Classification from Sensor-Based Portable Microwave Brain Imaging System Using Lightweight Deep Learning Models.
- Author
-
Hossain, Amran, Islam, Mohammad Tariqul, Rahman, Tawsifur, Chowdhury, Muhammad E. H., Tahir, Anas, Kiranyaz, Serkan, Mat, Kamarulzaman, Beng, Gan Kok, and Soliman, Mohamed S.
- Subjects
DEEP learning, MICROWAVE imaging, IMAGING systems, BRAIN tumors, BRAIN imaging, TUMOR classification, MICROWAVE reflectometry, IMAGE segmentation - Abstract
Automated brain tumor segmentation from reconstructed microwave (RMW) brain images and image classification is essential for the investigation and monitoring of the progression of brain disease. The manual detection, classification, and segmentation of tumors are extremely time-consuming but crucial tasks due to the tumor's pattern. In this paper, we propose a new lightweight segmentation model called MicrowaveSegNet (MSegNet), which segments the brain tumor, and a new classifier called the BrainImageNet (BINet) model to classify the RMW images. Initially, three hundred (300) RMW brain image samples were obtained from our sensors-based microwave brain imaging (SMBI) system to create an original dataset. Then, image preprocessing and augmentation techniques were applied to make 6000 training images per fold for a 5-fold cross-validation. Later, the MSegNet and BINet were compared to state-of-the-art segmentation and classification models to verify their performance. The MSegNet has achieved an Intersection-over-Union (IoU) and Dice score of 86.92% and 93.10%, respectively, for tumor segmentation. The BINet has achieved an accuracy, precision, recall, F1-score, and specificity of 89.33%, 88.74%, 88.67%, 88.61%, and 94.33%, respectively, for three-class classification using raw RMW images, whereas it achieved 98.33%, 98.35%, 98.33%, 98.33%, and 99.17%, respectively, for segmented RMW images. Therefore, the proposed cascaded model can be used in the SMBI system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
245. Salp Swarm Algorithm with Multilevel Thresholding Based Brain Tumor Segmentation Model.
- Author
-
Halawani, Hanan T.
- Subjects
BRAIN tumors, COMPUTER-aided diagnosis, THRESHOLDING algorithms, MAGNETIC resonance imaging, IMAGE processing, IMAGE segmentation - Abstract
Biomedical image processing acts as an essential part of several medical applications in supporting computer-aided disease diagnosis. Magnetic Resonance Imaging (MRI) is a commonly utilized imaging tool for the clinical examination of gliomas. Biomedical image segmentation plays a vital role in the healthcare decision-making process and helps to identify the affected regions in the MRI. Though numerous segmentation models are available in the literature, effective segmentation models for brain tumors (BT) are still needed. This study develops a salp swarm algorithm with multi-level thresholding based brain tumor segmentation (SSAMLT-BTS) model. The presented SSAMLT-BTS model initially employs bilateral filtering-based noise removal and skull stripping as a pre-processing phase. In addition, the Otsu thresholding approach is applied to segment the biomedical images, and the optimum threshold values are chosen by the use of SSA. Finally, the active contour (AC) technique is used to identify the suspicious regions in the medical image. A comprehensive experimental analysis of the SSAMLT-BTS model is performed using a benchmark dataset, and the outcomes are inspected in many aspects. The simulation outcomes report the improvement of the SSAMLT-BTS model over recent approaches, with a maximum accuracy of 95.95%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
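Editor's note on record 245: multilevel Otsu thresholding picks threshold values that maximize the between-class variance of the image histogram; in SSAMLT-BTS that search is performed by the salp swarm algorithm. The sketch below shows the objective only, with a plain random search standing in for SSA; it is not the paper's implementation, and the bin count and search budget are assumptions.

```python
# Illustrative sketch of the multilevel-Otsu objective that a metaheuristic
# (here replaced by random search) would maximize. Not the paper's code.
import numpy as np

def between_class_variance(hist: np.ndarray, thresholds) -> float:
    """Otsu between-class variance of a 256-bin histogram split at `thresholds`."""
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = (p * levels).sum()
    edges = [0, *sorted(int(t) for t in thresholds), 256]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            variance += w * (mu - mu_total) ** 2
    return variance

def multilevel_thresholds(image: np.ndarray, k: int = 3, iters: int = 5000, seed: int = 0):
    """Pick k thresholds by random search (a stand-in for the SSA optimizer)."""
    rng = np.random.default_rng(seed)
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    best, best_score = None, -1.0
    for _ in range(iters):
        candidate = np.sort(rng.integers(1, 255, size=k))
        score = between_class_variance(hist, candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

if __name__ == "__main__":
    toy_image = np.random.default_rng(1).integers(0, 256, size=(128, 128))
    print(multilevel_thresholds(toy_image, k=3))
```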
246. Selective Deeply Supervised Multi-Scale Attention Network for Brain Tumor Segmentation.
- Author
-
Rehman, Azka, Usman, Muhammad, Shahid, Abdullah, Latif, Siddique, and Qadir, Junaid
- Subjects
- *
BRAIN tumors , *HUMAN error , *IMAGE segmentation , *CELL proliferation , *ATTENTION - Abstract
Brain tumors are among the deadliest forms of cancer, characterized by abnormal proliferation of brain cells. While early identification of brain tumors can greatly aid their therapy, manual segmentation by expert doctors is often time-consuming, tedious, and prone to human error, and can act as a bottleneck in the diagnostic process. This motivates the development of automated algorithms for brain tumor segmentation. However, accurately segmenting the enhancing and core tumor regions is complicated by high inter- and intra-tumor heterogeneity in texture, morphology, and shape. This study proposes a fully automatic method, the selective deeply supervised multi-scale attention network (SDS-MSA-Net), which segments brain tumor regions with a multi-scale attention architecture trained using a novel selective deep supervision (SDS) mechanism. The method uses a 3D input composed of five consecutive slices, in addition to a 2D slice, to preserve sequential information. The multi-scale architecture includes two encoding units that extract global and local features from the 3D and 2D inputs, respectively. These coarse features are passed through attention units, which filter out redundant information by assigning it lower weights. The refined features are fed into a decoder block that upscales the features at various levels while learning patterns relevant to all tumor regions. The SDS block immediately upscales features from intermediate layers of the decoder to produce segmentations of the whole, enhancing, and core tumor regions. The proposed framework was evaluated on the BraTS2020 dataset and showed improved performance in brain tumor region segmentation, particularly for the core and enhancing tumor regions, demonstrating the effectiveness of the approach. Our code is publicly available. [ABSTRACT FROM AUTHOR] An illustrative sketch of a deep-supervision auxiliary head follows this record.
- Published
- 2023
- Full Text
- View/download PDF
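Editor's note on record 246: the selective deep supervision described above attaches auxiliary segmentation heads to intermediate decoder layers, each upscaled to full resolution and supervised against a tumor-region mask. The PyTorch sketch below shows a generic auxiliary head of that kind; the channel sizes, the 2D setting, and the binary "whole tumor" target are assumptions, not the authors' code.

```python
# Illustrative sketch of a deep-supervision auxiliary head: project intermediate
# decoder features to class logits, upsample, and supervise. Not SDS-MSA-Net code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxSegHead(nn.Module):
    """1x1 conv to class logits, then upsample to the full output resolution."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.project = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feat: torch.Tensor, out_size) -> torch.Tensor:
        logits = self.project(feat)
        return F.interpolate(logits, size=out_size, mode="bilinear", align_corners=False)

if __name__ == "__main__":
    feat = torch.randn(2, 64, 60, 60)                     # decoder features at 1/4 resolution
    target = torch.randint(0, 2, (2, 240, 240)).float()   # toy whole-tumor mask
    head = AuxSegHead(in_channels=64, num_classes=1)
    aux_logits = head(feat, out_size=(240, 240))
    aux_loss = F.binary_cross_entropy_with_logits(aux_logits.squeeze(1), target)
    print(aux_loss.item())
```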
247. Deep mutual learning for brain tumor segmentation with the fusion network.
- Author
-
Gao, Huan, Miao, Qiguang, Ma, Daikai, and Liu, Ruyi
- Subjects
- *
BRAIN tumors , *DEEP learning , *LOGITS , *LEARNING strategies , *COGNITIVE training , *DECODING algorithms - Abstract
• This paper introduces a mutual learning strategy for training the brain tumor segmentation network: the shallowest feature map supervises the subsequent feature maps of the network, while the deepest logits supervise the logits of the previous, shallower layers. The shallow feature maps and deep logits supervise each other mutually, improving the accuracy of tumor sub-region segmentation. • This paper introduces deep supervision to train the network, using the prediction of each up-sampling layer to deeply supervise the training process and enlarge the receptive field, which improves the overall segmentation accuracy. • Extensive experiments on the BraTS dataset show that our method effectively improves the accuracy of brain tumor segmentation and reaches state-of-the-art performance. Deep learning methods have been successfully applied to brain tumor segmentation. However, the extreme data imbalance across the different tumor sub-regions reduces segmentation accuracy when deep learning methods are trained on these data. We introduce a deep mutual learning strategy to address this challenge; the proposed fusion network integrates transformer layers in both the encoder and decoder of a U-Net architecture. In the network, the prediction of each up-sampled layer deeply supervises the training process to enlarge the receptive field for feature extraction; the feature map of the shallowest layer supervises the feature maps of subsequent layers to preserve edge information and guide sub-region segmentation; and the classification logits of the deepest layer supervise the logits of the previous layers to provide more semantic information for distinguishing tumor sub-regions. Furthermore, the feature maps and the classification logits supervise each other mutually, improving the overall segmentation accuracy. Experimental results on the benchmark dataset show that our method achieves significant performance gains over existing methods. [ABSTRACT FROM AUTHOR] A minimal sketch of how such mutual-supervision loss terms can be formed follows this record.
- Published
- 2023
- Full Text
- View/download PDF
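Editor's note on record 247: the abstract describes two coupled supervision signals, a shallow feature map guiding deeper feature maps and the deepest logits guiding shallower logits. The sketch below shows one plausible way to form such loss terms (MSE for features, KL divergence for logits); the tensor shapes, weights, and exact pairing are assumptions, not the authors' formulation.

```python
# Illustrative sketch of mutual-supervision loss terms: feature alignment plus
# logit distillation. Not the paper's implementation; shapes are assumptions.
import torch
import torch.nn.functional as F

def mutual_learning_loss(shallow_feat, deep_feat, shallow_logits, deep_logits,
                         w_feat: float = 0.5, w_logit: float = 0.5):
    # Feature term: the shallow, edge-rich map guides the deeper map.
    deep_up = F.interpolate(deep_feat, size=shallow_feat.shape[2:],
                            mode="bilinear", align_corners=False)
    feat_loss = F.mse_loss(deep_up, shallow_feat.detach())

    # Logit term: the deepest, semantically strongest logits guide shallower ones.
    shallow_up = F.interpolate(shallow_logits, size=deep_logits.shape[2:],
                               mode="bilinear", align_corners=False)
    logit_loss = F.kl_div(F.log_softmax(shallow_up, dim=1),
                          F.softmax(deep_logits.detach(), dim=1),
                          reduction="batchmean")
    return w_feat * feat_loss + w_logit * logit_loss

if __name__ == "__main__":
    shallow_feat = torch.randn(2, 32, 128, 128)
    deep_feat = torch.randn(2, 32, 32, 32)
    shallow_logits = torch.randn(2, 4, 64, 64)    # 4 classes: background + 3 sub-regions
    deep_logits = torch.randn(2, 4, 128, 128)
    print(mutual_learning_loss(shallow_feat, deep_feat, shallow_logits, deep_logits).item())
```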
248. Deep learning based brain tumor segmentation: a survey.
- Author
-
Liu, Zhihua, Tong, Lei, Chen, Long, Jiang, Zheheng, Zhou, Feixiang, Zhang, Qianni, Zhang, Xiangrong, Jin, Yaochu, and Zhou, Huiyu
- Subjects
BRAIN tumors ,DEEP learning ,OBJECT recognition (Computer vision) ,COMPUTER vision ,ARCHITECTURAL design ,IMAGE analysis - Abstract
Brain tumor segmentation is one of the most challenging problems in medical image analysis. The goal of brain tumor segmentation is to generate accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning based methods have been applied to brain tumor segmentation and achieved promising results. Considering the remarkable breakthroughs made by state-of-the-art technologies, we provide this survey with a comprehensive study of recently developed deep learning based brain tumor segmentation techniques. More than 150 scientific papers are selected and discussed in this survey, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processes. We also provide insightful discussions for future development directions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
249. HMNet: Hierarchical Multi-Scale Brain Tumor Segmentation Network.
- Author
-
Zhang, Ruifeng, Jia, Shasha, Adamu, Mohammed Jajere, Nie, Weizhi, Li, Qiang, and Wu, Ting
- Subjects
- *
BRAIN tumors , *CONVOLUTIONAL neural networks - Abstract
An accurate and efficient automatic brain tumor segmentation algorithm is important for clinical practice. In recent years, there has been much interest in automatic segmentation algorithms based on convolutional neural networks. In this paper, we propose a novel hierarchical multi-scale segmentation network (HMNet), which contains a high-resolution branch and parallel multi-resolution branches. The high-resolution branch keeps track of the brain tumor's spatial details, while multi-resolution feature exchange and fusion allow the network's receptive fields to adapt to brain tumors of different shapes and sizes. In particular, to overcome the large computational overhead of expensive 3D convolution, we propose a lightweight conditional channel weighting block that reduces GPU memory usage and improves the efficiency of HMNet. We also propose a lightweight multi-resolution feature fusion (LMRF) module to further reduce model complexity and the redundancy of the feature maps. We evaluate the proposed network on the BraTS 2020 dataset. The Dice similarity coefficients of HMNet for ET, WT, and TC are 0.781, 0.901, and 0.823, respectively. Extensive comparative experiments on the BraTS 2020 dataset and two other datasets show that the proposed HMNet achieves satisfactory performance compared with state-of-the-art approaches. [ABSTRACT FROM AUTHOR] A rough illustrative sketch of a channel-weighting block follows this record.
- Published
- 2023
- Full Text
- View/download PDF
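Editor's note on record 249: the "lightweight conditional channel weighting" block is motivated as a cheap substitute for expensive 3D convolutions. The sketch below shows a squeeze-and-excitation-style 3D channel-weighting block as a rough analogue of that idea; HMNet's actual block may differ, and the channel and reduction sizes are assumptions.

```python
# Illustrative sketch: re-weight channels of a 3D feature map from globally
# pooled statistics instead of applying a full 3D convolution. Not HMNet code.
import torch
import torch.nn as nn

class ChannelWeighting3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * weights  # channel-wise gating, far cheaper than a 3D conv

if __name__ == "__main__":
    x = torch.randn(1, 32, 16, 64, 64)        # (batch, channels, depth, height, width)
    print(ChannelWeighting3D(32)(x).shape)    # torch.Size([1, 32, 16, 64, 64])
```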
250. Axial Attention Convolutional Neural Network for Brain Tumor Segmentation with Multi-Modality MRI Scans.
- Author
-
Tian, Weiwei, Li, Dengwang, Lv, Mengyu, and Huang, Pu
- Subjects
- *
CONVOLUTIONAL neural networks , *BRAIN tumors , *MEDICAL imaging systems , *MAGNETIC resonance imaging , *COMPUTATIONAL complexity - Abstract
Accurately identifying tumors from MRI scans is of the utmost importance for clinical diagnostics and for planning brain tumor treatment. However, manual segmentation is a challenging and time-consuming process in practice and exhibits high variability between doctors. Therefore, this paper establishes an axial attention brain tumor segmentation network (AABTS-Net) that automatically segments tumor subregions from multi-modality MRIs. The axial attention mechanism is employed to capture richer semantic information, making it easier for the model to provide local–global contextual information by incorporating local and global feature representations while reducing computational complexity. A deep supervision mechanism is employed to avoid vanishing gradients and guide the AABTS-Net to generate better feature representations. A hybrid loss is employed to handle the class imbalance of the dataset. Furthermore, we conduct comprehensive experiments on the BraTS 2019 and 2020 datasets. The proposed AABTS-Net shows greater robustness and accuracy, which indicates that the model can be employed in clinical practice and provides a new avenue for medical image segmentation systems. [ABSTRACT FROM AUTHOR] An illustrative sketch of single-axis (axial) attention follows this record.
- Published
- 2023
- Full Text
- View/download PDF
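Editor's note on record 250: axial attention restricts self-attention to one spatial axis at a time, which keeps the cost linear in the remaining dimensions. The PyTorch sketch below shows the height-axis half of such a block; it is a generic illustration, not the AABTS-Net code, and the use of nn.MultiheadAttention and the tensor shapes are assumptions. A full axial block would pair it with an identical width-axis pass.

```python
# Illustrative sketch: multi-head self-attention applied along the height axis only.
# Not the AABTS-Net implementation.
import torch
import torch.nn as nn

class AxialAttention2D(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Treat each image column as an independent sequence of length h.
        seq = x.permute(0, 3, 2, 1).reshape(b * w, h, c)    # (b*w, h, c)
        out, _ = self.attn(seq, seq, seq)
        return out.reshape(b, w, h, c).permute(0, 3, 2, 1)  # back to (b, c, h, w)

if __name__ == "__main__":
    feat = torch.randn(2, 32, 40, 40)
    print(AxialAttention2D(32)(feat).shape)  # torch.Size([2, 32, 40, 40])
```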