1. Machine learning-based multimodal prediction of language outcomes in chronic aphasia
- Author
- Leonardo Bonilha, Roger D. Newman-Norlund, Julius Fridriksson, Alexandra Basilakos, Argye E. Hillis, Chris Rorden, Sigfus Kristinsson, Grigori Yourganov, Feifei Xiao, and Wanfang Zhang
- Subjects
Male, Female, Adult, Middle Aged, Aged, Aged 80 and over, Humans, Aphasia, chronic aphasia, Stroke, Chronic Disease, Severity of Illness Index, Outcome Assessment (Health Care), Language Tests, Western Aphasia Battery, Neuroimaging, Functional Neuroimaging, Multimodal Imaging, multimodal, Magnetic Resonance Imaging, Functional magnetic resonance imaging, fMRI, Diffusion Tensor Imaging, Fractional anisotropy, FA, Cerebral blood flow, CBF, Cerebrovascular Circulation, lesion, Machine learning, Artificial intelligence, Support Vector Machine, Computer science, Anatomy, Neurology, Neurology (clinical), Radiology Nuclear Medicine and imaging, Radiological and Ultrasound Technology
- Abstract
Recent studies have combined multiple neuroimaging modalities to gain further understanding of the neurobiological substrates of aphasia. Following this line of work, the current study uses machine learning approaches to predict aphasia severity and specific language measures based on a multimodal neuroimaging dataset. A total of 116 individuals with chronic left-hemisphere stroke were included in the study. Neuroimaging data included task-based functional magnetic resonance imaging (fMRI), diffusion-based fractional anisotropy (FA) values, cerebral blood flow (CBF), and lesion-load data. The Western Aphasia Battery was used to measure aphasia severity and specific language functions. As a primary analysis, we constructed support vector regression (SVR) models predicting language measures based on (i) each neuroimaging modality separately, (ii) lesion volume alone, and (iii) a combination of all modalities. Prediction accuracy across models was subsequently statistically compared. Prediction accuracy across modalities and language measures varied substantially (predicted vs. empirical correlation range: r = .00–.67). The multimodal prediction model yielded the most accurate prediction in all cases (r = .53–.67), and statistical superiority in favor of the multimodal model was achieved in 28 of 30 model comparisons.

The current study used machine learning approaches to predict aphasia severity and specific language measures based on a multimodal neuroimaging dataset. Our findings revealed a complementary advantage of integrating several neuroimaging modalities within the same model framework, as compared to any single-modality prediction model.
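To make the modeling setup concrete, below is a minimal sketch, not the authors' pipeline, of how single-modality, lesion-volume-only, and multimodal SVR models might be compared with scikit-learn: each model's cross-validated predictions are correlated with the empirical scores, mirroring the predicted-versus-empirical correlations reported above. All feature matrices, feature dimensions, and the WAB Aphasia Quotient target below are synthetic placeholders.

```python
# Hedged sketch of the modality comparison described in the abstract.
# Feature blocks and the outcome score are random placeholders, not real data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
n_subjects = 116  # sample size reported in the abstract

# Hypothetical per-subject feature blocks, one per neuroimaging modality.
modalities = {
    "fmri":        rng.normal(size=(n_subjects, 50)),  # task-based activation features
    "fa":          rng.normal(size=(n_subjects, 50)),  # diffusion fractional anisotropy
    "cbf":         rng.normal(size=(n_subjects, 50)),  # cerebral blood flow
    "lesion_load": rng.normal(size=(n_subjects, 50)),  # regional lesion load
}
lesion_volume = rng.normal(size=(n_subjects, 1))        # single-feature baseline model
wab_aq = rng.normal(loc=60, scale=20, size=n_subjects)  # placeholder WAB Aphasia Quotient

def cv_prediction_accuracy(X, y, n_splits=5):
    """Correlate cross-validated SVR predictions with empirical scores."""
    model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    y_pred = cross_val_predict(model, X, y, cv=cv)
    r, _ = pearsonr(y_pred, y)
    return r

# (i) each modality separately, (ii) lesion volume alone, (iii) all modalities combined.
results = {name: cv_prediction_accuracy(X, wab_aq) for name, X in modalities.items()}
results["lesion_volume"] = cv_prediction_accuracy(lesion_volume, wab_aq)
results["multimodal"] = cv_prediction_accuracy(np.hstack(list(modalities.values())), wab_aq)

for name, r in results.items():
    print(f"{name:>13}: predicted vs. empirical r = {r:.2f}")
```

With real feature matrices substituted for the placeholders, the per-model correlations printed here would play the role of the r values compared statistically in the study.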
- Published
- 2020