Transformer based Models for Unsupervised Anomaly Segmentation in Brain MR Images
- Publication Year :
- 2022
Abstract
- The quality of patient care associated with diagnostic radiology is proportional to a physician's workload. Segmentation is a fundamental limiting precursor to both diagnostic and therapeutic procedures. Advances in machine learning (ML) aim to increase diagnostic efficiency by replacing single-application solutions with generalized algorithms. The goal of unsupervised anomaly detection (UAD) is to identify potential anomalous regions unseen during training, and convolutional neural network (CNN) based autoencoders (AEs) and variational autoencoders (VAEs) are considered the de facto approach to reconstruction-based anomaly segmentation. However, the restricted receptive field of CNNs limits their ability to model global context: when anomalous regions cover large parts of the image, CNN-based AEs cannot form a semantic understanding of the image. Meanwhile, vision transformers (ViTs) have emerged as a competitive alternative to CNNs; they rely on a self-attention mechanism that relates image patches to one another. In this paper, we investigate the capabilities of Transformers in building AEs for the reconstruction-based UAD task, with the aim of reconstructing coherent and more realistic images. We focus on anomaly segmentation for brain magnetic resonance imaging (MRI) and present five Transformer-based models that achieve segmentation performance comparable or superior to state-of-the-art (SOTA) models. The source code is made publicly available on GitHub: https://github.com/ahmedgh970/Transformers_Unsupervised_Anomaly_Segmentation.git.
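- The abstract describes reconstruction-based UAD: an autoencoder is trained to reconstruct its input, and anomalies are segmented from the pixel-wise residual between the input and its reconstruction. The sketch below is a minimal illustration of that idea with a Transformer-based autoencoder in PyTorch. It is an assumption for illustration only, not one of the paper's five models; the class name `TransformerAE`, the patch size, and all hyperparameters are hypothetical, and the authors' actual code is in the linked repository.

```python
# Minimal sketch of reconstruction-based UAD with a Transformer autoencoder.
# Illustrative only: architecture and hyperparameters are assumptions, not the
# paper's models (see the linked GitHub repository for the authors' code).
import torch
import torch.nn as nn


class TransformerAE(nn.Module):
    def __init__(self, img_size=128, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch = patch
        n_patches = (img_size // patch) ** 2
        patch_dim = patch * patch  # single-channel MRI slices assumed
        self.to_tokens = nn.Linear(patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_pixels = nn.Linear(dim, patch_dim)

    def forward(self, x):
        # x: (B, 1, H, W) -> non-overlapping patches -> token sequence
        b, _, h, w = x.shape
        p = self.patch
        patches = x.unfold(2, p, p).unfold(3, p, p)        # (B, 1, H/p, W/p, p, p)
        patches = patches.contiguous().view(b, -1, p * p)  # (B, N, p*p)
        tokens = self.to_tokens(patches) + self.pos
        tokens = self.encoder(tokens)                      # global self-attention over patches
        recon = self.to_pixels(tokens)                     # (B, N, p*p)
        recon = recon.view(b, h // p, w // p, p, p)
        recon = recon.permute(0, 1, 3, 2, 4).reshape(b, 1, h, w)
        return recon


def anomaly_map(model, x):
    # Pixel-wise residual between input and reconstruction; thresholding this
    # map yields the anomaly segmentation.
    with torch.no_grad():
        return (x - model(x)).abs()
```

- In this setup the autoencoder is trained with a reconstruction loss on anomaly-free scans only, so anomalous regions seen at test time are reconstructed poorly and stand out in the residual map; self-attention over patches is what lets the model use global context rather than a restricted local receptive field.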
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2207.02059
- Document Type :
- Working Paper