
Decoding Imagined Speech from EEG Data: A Hybrid Deep Learning Approach to Capturing Spatial and Temporal Features.

Authors :
Alharbi, Yasser F.
Alotaibi, Yousef A.
Source :
Life (2075-1729). Nov 2024, Vol. 14, Issue 11, p1501. 19p.
Publication Year :
2024

Abstract

Neuroimaging is revolutionizing our ability to investigate the brain's structural and functional properties, enabling us to visualize brain activity during diverse mental processes and actions. One of the most widely used neuroimaging techniques is electroencephalography (EEG), which records electrical activity from the brain using electrodes positioned on the scalp. EEG signals capture both spatial (brain region) and temporal (time-based) data. While a high temporal resolution is achievable with EEG, spatial resolution is comparatively limited. Consequently, capturing both spatial and temporal information from EEG data to recognize mental activities remains challenging. In this paper, we represent the spatial and temporal information in EEG signals by transforming the data into sequences of topographic brain maps. We then apply hybrid deep learning models to capture the spatiotemporal features of the EEG topographic images and classify imagined English words. The hybrid framework utilizes a sequential combination of three-dimensional convolutional neural networks (3DCNNs) and recurrent neural networks (RNNs). The experimental results reveal the effectiveness of the proposed approach, achieving an average accuracy of 77.8% in identifying imagined English speech. [ABSTRACT FROM AUTHOR]
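
As a rough illustration of the hybrid architecture described in the abstract, the following PyTorch sketch stacks a small 3D convolutional front end over a sequence of topographic maps and passes the per-frame features to an LSTM classifier. The layer sizes, map resolution (32x32), sequence length, number of classes, and the choice of an LSTM as the recurrent stage are illustrative assumptions, not the paper's reported configuration.

# Minimal sketch of a 3DCNN + RNN hybrid for sequences of EEG topographic maps.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn

class Hybrid3DCNNRNN(nn.Module):
    def __init__(self, n_classes=5, map_size=32, hidden=128):
        super().__init__()
        # 3D convolutions extract spatiotemporal features from the stacked
        # topographic maps, input shape: (batch, 1, time, height, width).
        self.cnn3d = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool spatially, keep time axis
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        feat_dim = 32 * (map_size // 4) * (map_size // 4)
        # The recurrent stage models the temporal evolution of the CNN features.
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, 1, time, height, width)
        feats = self.cnn3d(x)                        # (B, 32, T, H/4, W/4)
        b, c, t, h, w = feats.shape
        feats = feats.permute(0, 2, 1, 3, 4).reshape(b, t, c * h * w)
        out, _ = self.rnn(feats)                     # (B, T, hidden)
        return self.classifier(out[:, -1])           # classify from last time step

if __name__ == "__main__":
    # Dummy batch: 4 sequences of 16 topographic maps of size 32x32.
    model = Hybrid3DCNNRNN(n_classes=5)
    x = torch.randn(4, 1, 16, 32, 32)
    print(model(x).shape)  # torch.Size([4, 5])

In this sketch the 3D pooling compresses only the spatial dimensions so that the full temporal sequence is preserved for the LSTM; whether the published model pools over time as well is not stated in the abstract.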

Details

Language :
English
ISSN :
20751729
Volume :
14
Issue :
11
Database :
Academic Search Index
Journal :
Life (2075-1729)
Publication Type :
Academic Journal
Accession number :
181166553
Full Text :
https://doi.org/10.3390/life14111501