
A double-branch graph convolutional network based on individual differences weakening for motor imagery EEG classification.

Authors :
Ma, Weifeng
Wang, Chuanlai
Sun, Xiaoyong
Lin, Xuefen
Wang, Yuchen
Source :
Biomedical Signal Processing & Control; Jul 2023, Vol. 84
Publication Year :
2023

Abstract

The emergence of deep learning methods has driven the widespread use of brain–machine interface motor imagery classification in machine control and medical rehabilitation, achieving classification accuracies superior to those of traditional machine learning methods. However, models trained with current mainstream deep learning methods show a maximum variation in accuracy of over 20% when classifying data from different subjects in the same dataset. This large variation indicates the weak robustness of such models and the difficulty of extracting features for some subjects. Because motor imagery classification targets individual users, results that vary too much from one user to another hinder the adoption of the technique. In our research, we have found that the accuracy differences between subjects arise because their data differ in spatial characteristics and training difficulty. Therefore, exploring and weakening these differences between subjects' data can reduce the accuracy gap between subjects and ensure that the model achieves good classification accuracy for each subject. We call this operation of reducing the accuracy gap individual differences weakening. To implement this operation, we propose a Double-branch Graph Convolutional Attention Neural Network (DGCAN), which uses a graph neural network to select channels that are less disturbed by spatial location factors and applies spatial–temporal convolution to extract the features contained in the selected channels; weakening the influence of spatial features in this way contributes to individual differences weakening. We also design a loss function, EegLoss, which focuses training on hard samples and effectively reduces the amount of data in each subject to which the model is insensitive. We test model performance on BCI Competition IV datasets 2a and 2b, achieving accuracies of 84% and 86%, respectively. We also compare the accuracy gap between subjects, showing that our model effectively reduces this gap and is more robust than previous models. [ABSTRACT FROM AUTHOR]
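
Note: the abstract does not give the exact formulation of EegLoss or of the double-branch architecture. As a hedged illustration of the stated idea of focusing training on hard samples, the sketch below applies a focal-loss-style weighting to per-sample cross-entropy in PyTorch. The function name, the gamma parameter, and the implementation details are assumptions for illustration only, not the paper's definition.

    import torch
    import torch.nn.functional as F

    def hard_sample_weighted_loss(logits, targets, gamma=2.0):
        # Illustrative loss that up-weights hard (low-confidence) samples,
        # in the spirit of the "focus on training hard samples" description;
        # the actual EegLoss is defined in the full paper.
        log_probs = F.log_softmax(logits, dim=1)                # (batch, n_classes)
        ce = F.nll_loss(log_probs, targets, reduction="none")   # per-sample cross-entropy
        p_t = torch.exp(-ce)                                    # confidence of the true class
        weight = (1.0 - p_t) ** gamma                           # larger weight for harder samples
        return (weight * ce).mean()

    # Example: a batch of 8 trials with 4 motor imagery classes
    # (BCI Competition IV dataset 2a has four classes).
    logits = torch.randn(8, 4)
    targets = torch.randint(0, 4, (8,))
    loss = hard_sample_weighted_loss(logits, targets)

With gamma = 0 this reduces to plain cross-entropy; larger gamma values shift the gradient toward samples the model currently misclassifies, which is one common way to emphasize hard examples during training.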

Details

Language :
English
ISSN :
1746-8094
Volume :
84
Database :
Supplemental Index
Journal :
Biomedical Signal Processing & Control
Publication Type :
Academic Journal
Accession number :
163974248
Full Text :
https://doi.org/10.1016/j.bspc.2023.104684