16 results for "biomedical segmentation"
Search Results
2. New lesion segmentation for multiple sclerosis brain images with imaging and lesion-aware augmentation.
- Author
-
Basaran, Berke Doga, Matthews, Paul M., and Bai, Wenjia
- Subjects
MULTIPLE sclerosis, CENTRAL nervous system diseases, DEMYELINATION, BRAIN imaging, DATA augmentation
- Abstract
Multiple sclerosis (MS) is an inflammatory and demyelinating neurological disease of the central nervous system. Image-based biomarkers, such as lesions defined on magnetic resonance imaging (MRI), play an important role in MS diagnosis and patient monitoring. The detection of newly formed lesions provides crucial information for assessing disease progression and treatment outcome. Here, we propose a deep learning-based pipeline for new MS lesion detection and segmentation, which is built upon the nnU-Net framework. In addition to conventional data augmentation, we employ imaging and lesion-aware data augmentation methods, axial subsampling and CarveMix, to generate diverse samples and improve segmentation performance. The proposed pipeline is evaluated on the MICCAI 2021 MS new lesion segmentation challenge (MSSEG-2) dataset. It achieves an average Dice score of 0.510 and F1 score of 0.552 on cases with new lesions, and an average false positive lesion number nFP of 0.036 and false positive lesion volume VFP of 0.192 mm³ on cases with no new lesions. Our method outperforms other participating methods in the challenge and several state-of-the-art network architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
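[Editor's note: the Dice score and lesion-wise metrics quoted in the abstract above are standard overlap measures. A minimal sketch of how a Dice score is computed for binary masks, for readers unfamiliar with the metric (not the authors' code):]

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy example: two overlapping 2-D masks
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1      # 4 foreground pixels
target = np.zeros((4, 4)); target[1:3, 1:4] = 1  # 6 foreground pixels, 4 shared
print(dice_score(pred, target))  # 2*4 / (4+6) ≈ 0.8
```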
3. From Shallow to Deep: Exploiting Feature-Based Classifiers for Domain Adaptation in Semantic Segmentation
- Author
-
Matskevych, Alex, Wolny, Adrian, Pape, Constantin, and Kreshuk, Anna
- Subjects
microscopy segmentation, domain adaptation, deep learning, transfer learning, biomedical segmentation, Electronic computers. Computer science, QA75.5-76.95
- Abstract
The remarkable performance of Convolutional Neural Networks on image segmentation tasks comes at the cost of a large amount of pixelwise annotated images that have to be segmented for training. In contrast, feature-based learning methods, such as the Random Forest, require little training data, but rarely reach the segmentation accuracy of CNNs. This work bridges the two approaches in a transfer learning setting. We show that a CNN can be trained to correct the errors of the Random Forest in the source domain and then be applied to correct such errors in the target domain without retraining, as the domain shift between the Random Forest predictions is much smaller than between the raw data. By leveraging a few brushstrokes as annotations in the target domain, the method can deliver segmentations that are sufficiently accurate to act as pseudo-labels for target-domain CNN training. We demonstrate the performance of the method on several datasets with the challenging tasks of mitochondria, membrane and nuclear segmentation. It yields excellent performance compared to microscopy domain adaptation baselines, especially when a significant domain shift is involved.
- Published
- 2022
- Full Text
- View/download PDF
4. Exploring UMAP in hybrid models of entropy-based and representativeness sampling for active learning in biomedical segmentation.
- Author
-
Tan HS, Wang K, and Mcbeth R
- Subjects
- Humans, Male, Deep Learning, Prostate diagnostic imaging, Image Processing, Computer-Assisted methods, Supervised Machine Learning, Heart, Entropy
- Abstract
In this work, we study various hybrid models of entropy-based and representativeness sampling techniques in the context of active learning in medical segmentation, in particular examining the role of UMAP (Uniform Manifold Approximation and Projection) as a technique for capturing representativeness. Although UMAP has been shown viable as a general-purpose dimension reduction method in diverse areas, its role in deep learning-based medical segmentation has yet to be extensively explored. Using the cardiac and prostate datasets in the Medical Segmentation Decathlon for validation, we found that a novel hybrid Entropy-UMAP sampling technique achieved a statistically significant Dice score advantage over the random baseline (3.2% for cardiac, 4.5% for prostate) and attained the highest Dice coefficient among the spectrum of 10 distinct active learning methodologies we examined. This provides preliminary evidence of an interesting synergy between entropy-based and UMAP methods when the former precedes the latter in a hybrid model of active learning., Competing Interests: Declaration of competing interest: None declared. (Copyright © 2024 Elsevier Ltd. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
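[Editor's note: the hybrid sampling idea described in the abstract above can be sketched generically: shortlist the most uncertain samples by predictive entropy, then pick a spread-out, representative subset in a low-dimensional embedding. The greedy farthest-point step and the random 2-D embedding standing in for UMAP are illustrative assumptions, not the authors' implementation:]

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Per-sample entropy of softmax outputs; probs has shape (n_samples, n_classes)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def entropy_then_representative(probs, embedding, n_entropy=20, n_select=5):
    """Hybrid selection: shortlist the n_entropy most uncertain samples,
    then choose a representative subset via greedy farthest-point sampling
    in the embedding space (UMAP coordinates in the paper's setting)."""
    shortlist = np.argsort(-predictive_entropy(probs))[:n_entropy]
    pts = embedding[shortlist]
    chosen = [0]  # seed with the single most uncertain sample
    for _ in range(n_select - 1):
        # distance of every shortlisted point to its nearest chosen point
        d = np.min(np.linalg.norm(pts[:, None] - pts[chosen], axis=-1), axis=1)
        chosen.append(int(np.argmax(d)))  # take the farthest (most novel) point
    return shortlist[chosen]

rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
embedding = rng.normal(size=(100, 2))  # stand-in for a UMAP projection
print(entropy_then_representative(probs, embedding))
```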
5. FRNet: an end-to-end feature refinement neural network for medical image segmentation.
- Author
-
Wang, Dan, Hu, Guoqing, and Lyu, Chengzhi
- Subjects
COMPUTER-assisted image analysis (Medicine), IMAGE segmentation, COMPUTER-aided diagnosis, DIAGNOSTIC imaging, BLOOD vessels
- Abstract
Medical image segmentation is a crucial but challenging task for computer-aided diagnosis. In recent years, fully convolutional network-based methods have been widely applied to medical image segmentation. U-shape-based approaches are among the most successful structures in this medical field. However, the consecutive down-sampling operations in the encoder lead to the loss of spatial information, which is important for medical image segmentation. In this paper, we present a novel lightweight end-to-end feature refinement network (FRNet) to address this issue. The structure of our model is simple and efficient. Specifically, the network adopts an encoder-decoder network as its backbone, where two additional paths, a spatial refinement path and a semantic refinement path, are applied on the encoder and decoder, respectively, to improve the detailed representation ability and discriminative ability of our model. In addition, we introduce a feature adaptive fusion block (FAF block) that effectively combines features of different depths. The proposed FRNet can be trained in an end-to-end way. We have evaluated our method on three different medical image segmentation tasks. Experimental results show that FRNet outperforms state-of-the-art approaches. Without any post-processing, it achieves a high average accuracy of 0.968 and 0.936 for blood vessel segmentation and skin lesion segmentation, respectively. We further demonstrate that our method can be easily applied to other network structures to improve their performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
6. Siamese semi-disentanglement network for robust PET-CT segmentation.
- Author
-
Diao, Zhaoshuo, Jiang, Huiyan, Shi, Tianyu, and Yao, Yu-Dong
- Subjects
POSITRON emission tomography computed tomography, GENERATIVE adversarial networks, COMPUTED tomography, IMAGE segmentation, POSITRON emission tomography
- Abstract
A robust PET-CT segmentation network should guarantee that models trained on PET-CT images will still work when only CT images are available. This is particularly important because, due to the radioactivity and high cost of PET imaging, in many cases only CT images can be obtained. Disentanglement and Generative Adversarial Networks (GANs) are two commonly used strategies to deal with the missing modality. Disentanglement methods cannot successfully disentangle PET-CT images into modal features and anatomical features because PET-CT images do not satisfy anatomical information consistency constraints. GANs tend to ignore information that is critical for downstream tasks, such as tumor information. To address the above issues, we propose a siamese semi-disentanglement network. We extract high-level shared tumor features from PET images and CT images instead of anatomical features for downstream segmentation tasks. Meanwhile, in order to leverage low-level entanglement features during segmentation, a GAN is used to generate synthetic PET images from CT images. A Siamese Consistency Module (SCM) is proposed to ensure that the entangled low-level features of the synthetic PET images are consistent with those of the real PET images. The motivation of our proposed method is that the entanglement information discarded by the semi-disentanglement is compensated by the GAN, getting rid of the anatomical information consistency constraints, while the GAN can better retain tumor information through semi-disentanglement. We conduct experiments on two public PET-CT datasets and one private dataset: the Soft-Tissue-Sarcoma (STS) dataset, the HeadNeck dataset and the LiverTumor dataset. The results show that our proposed method successfully achieves robust PET-CT segmentation and outperforms other disentanglement methods and generative networks in the absence of the PET modality.
In the inference stage, with missing PET images, the siamese semi-disentanglement network proposed in this paper achieves results comparable to full-modality segmentation. • A novel disentanglement strategy for robust PET-CT segmentation is proposed. • Images are semi-disentangled into shared tumor features instead of anatomical features. • The GAN better retains tumor information through semi-disentanglement. • The proposed method outperforms disentanglement and GAN methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. New lesion segmentation for multiple sclerosis brain images with imaging and lesion-aware augmentation
- Author
-
Basaran, Berke Doga, Matthews, Paul M., Bai, Wenjia, and UK DRI Ltd
- Subjects
biomedical segmentation ,longitudinal lesion segmentation ,nnU-Net ,1701 Psychology ,General Neuroscience ,1702 Cognitive Sciences ,new lesion detection ,multiple sclerosis ,1109 Neurosciences ,MRI ,data augmentation - Abstract
Multiple sclerosis (MS) is an inflammatory and demyelinating neurological disease of the central nervous system. Image-based biomarkers, such as lesions defined on magnetic resonance imaging (MRI), play an important role in MS diagnosis and patient monitoring. The detection of newly formed lesions provides crucial information for assessing disease progression and treatment outcome. Here, we propose a deep learning-based pipeline for new MS lesion detection and segmentation, which is built upon the nnU-Net framework. In addition to conventional data augmentation, we employ imaging and lesion-aware data augmentation methods, axial subsampling and CarveMix, to generate diverse samples and improve segmentation performance. The proposed pipeline is evaluated on the MICCAI 2021 MS new lesion segmentation challenge (MSSEG-2) dataset. It achieves an average Dice score of 0.510 and F1 score of 0.552 on cases with new lesions, and an average false positive lesion number nFP of 0.036 and false positive lesion volume VFP of 0.192 mm³ on cases with no new lesions. Our method outperforms other participating methods in the challenge and several state-of-the-art network architectures.
- Published
- 2022
8. A spatial squeeze and multimodal feature fusion attention network for multiple tumor segmentation from PET–CT Volumes.
- Author
-
Diao, Zhaoshuo, Jiang, Huiyan, and Shi, Tianyu
- Subjects
POSITRON emission tomography computed tomography, MULTIPLE tumors, MULTIMODAL user interfaces, COMPUTER-aided diagnosis, POSITRON emission tomography, COMPUTED tomography, SQUEEZED light
- Abstract
Tumor segmentation is a key step in computer-aided diagnosis. The PET–CT co-segmentation method combines the high sensitivity of PET images and the anatomical information of CT images. For whole-body multiple tumors, such as soft tissue sarcoma and lymphoma, lesions differ in location and size, so the tumor area must be segmented according to whole-body anatomical information. How to effectively leverage whole-body contextual information and fuse multimodal information is the key to the problem. To address this issue, we propose a spatial squeeze and multimodal feature fusion attention network for whole-body multiple tumor segmentation based on PET–CT volumes. Our proposed method consists of two parts: a Coronal-Spatial Squeeze Attention Extraction Network (CSAE-Net) and a Precise PET–CT Fusion Attention Segmentation Network (PFAS-Net). In CSAE-Net, we squeeze a 3D PET–CT volume along the coronal plane into m 2D images and obtain a 3D Coronal Spatial Squeeze Attention Volume based on these 2D images. In PFAS-Net, the input is a 2D axial PET–CT slice, and the previously obtained coronal spatial squeeze attention map is used to guide the segmentation. Moreover, a Multimodal Fusion Attention (MFA) module is proposed to fuse the metabolic information of PET and the anatomical information of CT. We perform experiments on PET–CT datasets of two whole-body multiple tumors, Soft Tissue Sarcoma (STS) and Lymphoma. The results show that our proposed method improved Dice values by 8.03% on STS and 1.74% on Lymphoma. The visualization results also show that our proposed method is able to suppress high-uptake regions of normal tissues. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. A unified uncertainty network for tumor segmentation using uncertainty cross entropy loss and prototype similarity.
- Author
-
Diao, Zhaoshuo, Jiang, Huiyan, and Shi, Tianyu
- Subjects
ENTROPY, SIGNAL convolution, PROTOTYPES, TUMORS
- Abstract
Uncertainty estimation and out-of-distribution (OOD) detection are topics of practical significance in deep convolutional neural network-based tumor segmentation tasks. We propose a unified uncertainty segmentation network to handle these two tasks with a single network. The uncertainty cross entropy loss is proposed to guide the network to directly output the prediction uncertainty of each pixel instead of executing the network several times in the prediction phase. We extend the uncertainty to the case level to address OOD sample detection problems. Case-level uncertainty is estimated based on prototype similarity. We perform pixel-level uncertainty experiments and OOD detection experiments on four datasets. The experimental results show that our proposed method is more suitable for uncertainty estimation in tumor segmentation than existing methods. Our proposed method only needs to modify the network output and loss function and does not need to execute the network multiple times when estimating uncertainty. Moreover, our method shows improved performance in tumor segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
10. Comparison of UNet, ENet, and BoxENet for Segmentation of Mast Cells in Scans of Histological Slices
- Author
-
Karimov, A., Razumov, A., Manbatchurina, R., Simonova, K., Donets, I., Vlasova, A., Khramtsova, Y., and Ushenin, K.
- Abstract
Deep neural networks show high accuracy in the problem of semantic and instance segmentation of biomedical data. However, this approach is computationally expensive. The computational cost may be reduced by simplifying the network after training or by choosing a lighter architecture that segments with less accuracy but does it much faster. In the present study, we analyzed the accuracy and performance of the UNet and ENet architectures for the problem of semantic image segmentation. In addition, we investigated the ENet architecture with some convolution layers replaced by box-convolution layers. The analysis was performed on an original dataset consisting of histology slices with mast cells. These cells provide a region for segmentation with different types of borders, which vary from clearly visible to ragged. ENet was less accurate than UNet by only about 1-2%, but ENet was 8-15 times faster than UNet. © 2019 IEEE.
- Published
- 2019
11. Erratum: EFNet: evidence fusion network for tumor segmentation from PET-CT volumes (2021 Phys. Med. Biol. 66 205005).
- Author
-
Diao, Zhaoshuo, Jiang, Huiyan, Han, Xian-Hua, Yao, Yu-Dong, and Shi, Tianyu
- Subjects
TUMORS
- Published
- 2021
- Full Text
- View/download PDF
12. Comparison of UNet, ENet, and BoxENet for Segmentation of Mast Cells in Scans of Histological Slices
- Author
-
Anastasia Vlasova, Artem Razumov, Yulia Khramtsova, Ruslana Manbatchurina, Alexander Karimov, Irina Donets, Ksenia Simonova, and Konstantin Ushenin
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer science, CONVOLUTION, MULTILAYER NEURAL NETWORKS, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, BOX CONVOLUTION LAYER, NEURAL NETWORKS, Machine Learning (cs.LG), SEMANTIC SEGMENTATION, Mast (sailing), CYTOLOGY, Biomedical data, ENET, DEEP NEURAL NETWORKS, SEMANTICS, FOS: Electrical engineering, electronic engineering, information engineering, Semantic image segmentation, Segmentation, NEURAL NETWORK PERFORMANCE, Network architecture, IMAGE SEGMENTATION, Artificial neural network, business.industry, MAST CELLS, Image and Video Processing (eess.IV), UNET, Pattern recognition, Image segmentation, Electrical Engineering and Systems Science - Image and Video Processing, NETWORK ARCHITECTURE, BIOMEDICAL SEGMENTATION, CELLS, Deep neural networks, Artificial intelligence, business
- Abstract
Deep neural networks show high accuracy in the problem of semantic and instance segmentation of biomedical data. However, this approach is computationally expensive. The computational cost may be reduced by simplifying the network after training or by choosing a lighter architecture that segments with less accuracy but does it much faster. In the present study, we analyzed the accuracy and performance of the UNet and ENet architectures for the problem of semantic image segmentation. In addition, we investigated the ENet architecture with some convolution layers replaced by box-convolution layers. The analysis was performed on an original dataset consisting of histology slices with mast cells. These cells provide a region for segmentation with different types of borders, which vary from clearly visible to ragged. ENet was less accurate than UNet by only about 1-2%, but ENet was 8-15 times faster than UNet., Comment: 4 pages, 5 figures, 1 table
- Published
- 2019
13. Uncertainty-aware temporal self-learning (UATS): Semi-supervised learning for segmentation of prostate zones and beyond.
- Author
-
Meyer, Anneke, Ghosh, Suhita, Schindele, Daniel, Schostak, Martin, Stober, Sebastian, Hansen, Christian, and Rak, Marko
- Subjects
PROSTATE, CONVOLUTIONAL neural networks, SUPERVISED learning, DEEP learning
- Abstract
Various convolutional neural network (CNN) based concepts have been introduced for automatic segmentation of the prostate and its coarse subdivision into transition zone (TZ) and peripheral zone (PZ). However, when targeting a fine-grained segmentation of TZ, PZ, the distal prostatic urethra (DPU) and the anterior fibromuscular stroma (AFS), the task becomes more challenging and has not yet been solved at the level of human performance. One reason might be the insufficient amount of labeled data for supervised training. Therefore, we propose to apply a semi-supervised learning (SSL) technique named uncertainty-aware temporal self-learning (UATS) to overcome the expensive and time-consuming manual ground truth labeling. We combine the SSL techniques of temporal ensembling and uncertainty-guided self-learning to benefit from unlabeled images, which are often readily available. Our method significantly outperforms the supervised baseline and obtained a Dice coefficient (DC) of up to 78.9%, 87.3%, 75.3% and 50.6% for TZ, PZ, DPU and AFS, respectively. The obtained results are in the range of human inter-rater performance for all structures. Moreover, we investigate the method's robustness against noise and demonstrate its generalization capability for varying ratios of labeled data and on other challenging tasks, namely hippocampus and skin lesion segmentation. UATS achieved superior segmentation quality compared to the supervised baseline, particularly for minimal amounts of labeled data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
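[Editor's note: the building blocks named in the abstract above, temporal ensembling and uncertainty-gated self-learning, can be illustrated with a toy numpy sketch. The EMA weight, confidence threshold, and masking convention here are illustrative assumptions, not the paper's values:]

```python
import numpy as np

def ema_update(ensemble, current, alpha=0.6):
    """Temporal ensembling: exponential moving average of per-pixel
    class probabilities accumulated across training epochs."""
    return alpha * ensemble + (1 - alpha) * current

def confident_pseudo_labels(ensemble_probs, threshold=0.7):
    """Keep only pixels whose ensembled confidence exceeds the threshold;
    the rest are masked out (returned as -1) and excluded from the loss."""
    conf = ensemble_probs.max(axis=-1)
    labels = ensemble_probs.argmax(axis=-1)
    return np.where(conf >= threshold, labels, -1)

# Toy 2-pixel, 2-class example over two "epochs" of predictions
ens = np.array([[0.5, 0.5], [0.5, 0.5]])
for epoch_probs in ([[0.9, 0.1], [0.6, 0.4]], [[0.95, 0.05], [0.55, 0.45]]):
    ens = ema_update(ens, np.array(epoch_probs))
print(confident_pseudo_labels(ens))  # pixel 0 kept as class 0, pixel 1 masked as -1
```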
14. EFNet: evidence fusion network for tumor segmentation from PET-CT volumes.
- Author
-
Diao Z, Jiang H, Han XH, Yao YD, and Shi T
- Subjects
- Humans, Image Processing, Computer-Assisted, Neural Networks, Computer, Neoplasms diagnostic imaging, Positron Emission Tomography Computed Tomography methods
- Abstract
Precise delineation of the target tumor from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation uses the complementary information of the two modalities to reduce the uncertainty of single-modal segmentation and thereby obtain more accurate segmentation results. At present, PET-CT segmentation methods based on fully convolutional neural networks (FCNs) mainly adopt image fusion and feature fusion. Current fusion strategies do not consider the uncertainty of multi-modal segmentation, and complex feature fusion consumes more computing resources, especially when dealing with 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose the evidence fusion network (EFNet). The network outputs a PET result and a CT result containing uncertainty via the proposed evidence loss, which serve as PET evidence and CT evidence. We then use evidence fusion to reduce the uncertainty of the single-modal evidence. The final segmentation result is obtained from the evidence fusion of PET evidence and CT evidence. EFNet uses the basic 3D U-Net as a backbone and only uses simple unidirectional feature fusion. In addition, EFNet can separately train and predict PET evidence and CT evidence, without the need for parallel training of two branch networks. We conduct experiments on the soft-tissue-sarcomas and lymphoma datasets. Compared with 3D U-Net, our proposed method improves the Dice by 8% and 5%, respectively. Compared with the complex feature fusion method, our proposed method improves the Dice by 7% and 2%, respectively. Our results show that in FCN-based PET-CT segmentation methods, outputting uncertainty evidence and performing evidence fusion simplifies the network and improves the segmentation results., (© 2021 Institute of Physics and Engineering in Medicine.)
- Published
- 2021
- Full Text
- View/download PDF
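[Editor's note: the abstract above does not give the form of its evidence fusion, but the general idea of letting the less uncertain modality dominate can be sketched generically. Inverse-entropy weighting of two per-pixel probability maps is an illustrative choice here, not EFNet's actual scheme:]

```python
import numpy as np

def binary_entropy(p, eps=1e-12):
    """Per-pixel entropy of a foreground probability (uncertainty proxy)."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def fuse_by_uncertainty(p_pet, p_ct):
    """Weight each modality's per-pixel foreground probability by the
    inverse of its predictive entropy, so the more certain modality
    dominates the fused map. A generic illustration, not EFNet itself."""
    w_pet = 1.0 / (binary_entropy(p_pet) + 1e-6)
    w_ct = 1.0 / (binary_entropy(p_ct) + 1e-6)
    return (w_pet * p_pet + w_ct * p_ct) / (w_pet + w_ct)

p_pet = np.array([0.95, 0.50])  # confident, then maximally uncertain
p_ct  = np.array([0.60, 0.10])  # uncertain, then confident
fused = fuse_by_uncertainty(p_pet, p_ct)
print(fused.round(3))  # pixel 0 pulled toward PET, pixel 1 toward CT
```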
15. Deep learning in data annotation : projection-based 2.5D U-net structure for biomedical volumetric segmentation
- Author
-
Angermann, Christoph Helmut
- Abstract
By Christoph Helmut Angermann, Master's thesis, University of Innsbruck, 2019
Discovery Service for Jio Institute Digital Library