
Decoding Imagined Speech From EEG Using Transfer Learning

Authors :
Jerrin Thomas Panachakel
Ramakrishnan Angarai Ganesan
Source :
IEEE Access, Vol. 9, pp. 135371-135383 (2021)
Publication Year :
2021
Publisher :
IEEE, 2021.

Abstract

We present a transfer learning-based approach for decoding imagined speech from electroencephalogram (EEG). Features are extracted simultaneously from multiple EEG channels, rather than separately from individual channels, which helps capture the interrelationships between cortical regions. To alleviate the scarcity of data for training deep networks, sliding window-based data augmentation is performed. Mean phase coherence and magnitude-squared coherence, two popular measures used in EEG connectivity analysis, are used as features. These features are compactly arranged, exploiting their symmetry, to obtain a three-dimensional “image-like” representation whose third dimension indexes the alpha, beta and gamma EEG frequency bands. A deep network with ResNet50 as the base model is used for classifying the imagined prompts. The proposed method is tested on the publicly available ASU dataset of imagined speech EEG, comprising four different types of prompts. Across subjects, the decoding accuracy varies from a minimum of 79.7% for vowels to a maximum of 95.5% for short-long words. The accuracies obtained are better than those of state-of-the-art methods, and the technique is effective in decoding prompts of different complexities.
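
The sketch below illustrates, in Python, how the feature pipeline described in the abstract could plausibly be realized: band-limited mean phase coherence and magnitude-squared coherence between channel pairs, packed into one symmetric channel-by-channel plane per band, plus sliding-window augmentation. The sampling rate, band edges, window/step lengths, and the upper/lower-triangle layout are illustrative assumptions, not the authors' exact implementation; a ResNet50-based classifier would then be trained on the resulting three-band images.

```python
# Minimal sketch (not the authors' code) of connectivity-image features for
# imagined-speech EEG. All constants below are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, coherence

FS = 1000                                                        # assumed sampling rate (Hz)
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 70)}  # assumed band edges

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase band-pass filter applied along the sample axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def mean_phase_coherence(x, y):
    """Phase-locking value between two band-limited 1-D signals."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

def band_msc(x, y, lo, hi, fs=FS):
    """Magnitude-squared coherence averaged over one frequency band."""
    f, c = coherence(x, y, fs=fs, nperseg=min(256, x.shape[-1]))
    mask = (f >= lo) & (f <= hi)
    return c[mask].mean()

def connectivity_image(eeg):
    """eeg: (n_channels, n_samples) window -> (n_channels, n_channels, 3) array.

    For each band, mean phase coherence fills the upper triangle and
    magnitude-squared coherence the lower triangle of one plane, exploiting
    the symmetry of both measures (one plausible compact arrangement).
    """
    n_ch = eeg.shape[0]
    img = np.zeros((n_ch, n_ch, len(BANDS)))
    for k, (lo, hi) in enumerate(BANDS.values()):
        xb = bandpass(eeg, lo, hi)
        for i in range(n_ch):
            for j in range(i + 1, n_ch):
                img[i, j, k] = mean_phase_coherence(xb[i], xb[j])  # upper: MPC
                img[j, i, k] = band_msc(eeg[i], eeg[j], lo, hi)    # lower: MSC
    return img

def sliding_windows(trial, win=int(0.5 * FS), step=int(0.1 * FS)):
    """Sliding-window data augmentation: overlapping windows from one trial."""
    return [trial[:, s:s + win] for s in range(0, trial.shape[1] - win + 1, step)]
```

Each augmented window thus yields one three-band connectivity image; these images can be fed to a ResNet50 backbone (e.g., from a standard deep-learning framework) fine-tuned for the prompt classes, in the spirit of the transfer-learning setup described above.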

Details

Language :
English
ISSN :
2169-3536
Volume :
9
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.40ec06f8977c48d9a54d0b649c0701d7
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2021.3116196