
SF2Former: Amyotrophic lateral sclerosis identification from multi-center MRI data using spatial and frequency fusion transformer.

Authors :
Kushol, Rafsanjany
Luk, Collin C.
Dey, Avyarthana
Benatar, Michael
Briemberg, Hannah
Dionne, Annie
Dupré, Nicolas
Frayne, Richard
Genge, Angela
Gibson, Summer
Graham, Simon J.
Korngut, Lawrence
Seres, Peter
Welsh, Robert C.
Wilman, Alan H.
Zinman, Lorne
Kalra, Sanjay
Yang, Yee-Hong
Source :
Computerized Medical Imaging & Graphics. Sep 2023, Vol. 108, Article 102279.
Publication Year :
2023

Abstract

Amyotrophic Lateral Sclerosis (ALS) is a complex neurodegenerative disorder characterized by motor neuron degeneration. Significant research has begun to establish brain magnetic resonance imaging (MRI) as a potential biomarker to diagnose and monitor the state of the disease. Deep learning has emerged as a prominent class of machine learning algorithms in computer vision and has shown successful applications in various medical image analysis tasks. However, deep learning methods applied to neuroimaging have not achieved superior performance in classifying ALS patients from healthy controls, because the structural changes correlated with pathological features are subtle. A critical challenge for deep models is therefore to identify discriminative features from limited training data. To address this challenge, this study introduces a framework called SF2Former, which leverages the vision transformer architecture to distinguish ALS subjects from the control group by exploiting long-range relationships among image features. Additionally, spatial and frequency domain information is combined to enhance the network's performance, as MRI scans are initially captured in the frequency domain and then converted to the spatial domain. The proposed framework is trained on a series of consecutive coronal slices and uses ImageNet pre-trained weights through transfer learning. Finally, a majority voting scheme is applied over the coronal slices of each subject to produce the final classification decision. The proposed architecture is extensively evaluated with multi-modal neuroimaging data (i.e., T1-weighted, R2*, FLAIR) using two well-organized versions of the Canadian ALS Neuroimaging Consortium (CALSNIC) multi-center datasets. The experimental results demonstrate the superiority of the proposed strategy in terms of classification accuracy compared to several popular deep learning-based techniques.

• We propose a novel vision transformer model to classify ALS from healthy controls.
• We analyze two independent and extensive datasets of 120 and 232 MRI scans.
• We leverage multi-center and multi-modal neuroimaging data (T1W, R2*, and FLAIR).
• The proposed method demonstrates state-of-the-art classification accuracy. [ABSTRACT FROM AUTHOR]
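The abstract does not include implementation details, so the following is a minimal sketch, assuming a PyTorch-style pipeline, of two ideas it describes: feeding the network a coronal slice together with a frequency-domain (FFT) view of that slice, and collapsing per-slice predictions into one subject-level decision by majority voting. All names here (e.g., SpatialFrequencyFusion, majority_vote, the generic backbone) are illustrative assumptions, not the authors' published SF2Former code.

import torch
import torch.nn as nn

# Illustrative sketch only: names, channel layout, and architecture are
# assumptions, not the published SF2Former implementation.
class SpatialFrequencyFusion(nn.Module):
    """Classify a coronal MRI slice using spatial + frequency-domain input."""

    def __init__(self, backbone: nn.Module, num_classes: int = 2):
        super().__init__()
        self.backbone = backbone          # e.g., an ImageNet-pretrained ViT taking 3-channel input
        self.head = nn.LazyLinear(num_classes)

    def forward(self, slice_2d: torch.Tensor) -> torch.Tensor:
        # slice_2d: (batch, 1, H, W) spatial-domain slice
        k_space = torch.fft.fft2(slice_2d)              # frequency-domain view of the slice
        magnitude = torch.log1p(torch.abs(k_space))     # log-magnitude spectrum
        phase = torch.angle(k_space)                    # phase spectrum
        x = torch.cat([slice_2d, magnitude, phase], dim=1)  # 3 channels: spatial + frequency
        features = self.backbone(x)
        return self.head(features)


def majority_vote(slice_logits: torch.Tensor) -> int:
    """Subject-level label (ALS = 1, control = 0) from per-slice logits of shape (num_slices, 2)."""
    slice_preds = slice_logits.argmax(dim=1)            # hard label per coronal slice
    votes_for_als = int(slice_preds.sum().item())
    return 1 if votes_for_als * 2 > slice_preds.numel() else 0

In such a setup each subject contributes a stack of consecutive coronal slices; the classifier is applied slice by slice, and majority_vote turns the per-slice labels into a single subject-level prediction, mirroring the voting scheme described in the abstract.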

Details

Language :
English
ISSN :
0895-6111
Volume :
108
Database :
Academic Search Index
Journal :
Computerized Medical Imaging & Graphics
Publication Type :
Academic Journal
Accession number :
170903733
Full Text :
https://doi.org/10.1016/j.compmedimag.2023.102279