3,025 results on '"brain–computer interface (BCI)"'
Search Results
152. Home Automation Using Brain–Computer Interface
- Author
-
Raj, Utkarsh, Mukul, Manoj Kumar, Nath, Vijay, editor, and Mandal, Jyotsna Kumar, editor
- Published
- 2023
- Full Text
- View/download PDF
153. Overlapping filter bank convolutional neural network for multisubject multicategory motor imagery brain-computer interface
- Author
-
Jing Luo, Jundong Li, Qi Mao, Zhenghao Shi, Haiqin Liu, Xiaoyong Ren, and Xinhong Hei
- Subjects
Brain-computer interface (BCI) ,Multisubject BCI ,Motor imagery (MI) ,Convolutional neural network (CNN) ,Overlapping filter bank ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Analysis ,QA299.6-433 - Abstract
Abstract Background Motor imagery brain-computer interfaces (BCIs) are a classic and promising BCI technology for achieving brain-computer integration. In motor imagery BCI, the operational frequency band of the EEG greatly affects the performance of the motor imagery EEG recognition model. However, because most algorithms use a broad frequency band, the discriminative information from multiple sub-bands is not fully utilized. Thus, using convolutional neural networks (CNNs) to extract discriminative features from EEG signals of different frequency components is a promising method for multisubject EEG recognition. Methods This paper presents a novel overlapping filter bank CNN to incorporate discriminative information from multiple frequency components in multisubject motor imagery recognition. Specifically, two overlapping filter banks with a fixed low-cut frequency or a sliding low-cut frequency are employed to obtain multiple frequency-component representations of EEG signals. Then, multiple CNN models are trained separately. Finally, the output probabilities of the multiple CNN models are integrated to determine the predicted EEG label. Results Experiments were conducted with four popular CNN backbone models and three public datasets. The results showed that the overlapping filter bank CNN was efficient and universal in improving multisubject motor imagery BCI performance. Specifically, compared with the original backbone models, the proposed method improved average accuracy by 3.69 percentage points, F1 score by 0.04, and AUC by 0.03. In addition, the proposed method performed best in comparison with state-of-the-art methods. Conclusion The proposed overlapping filter bank CNN framework with fixed low-cut frequency is an efficient and universal method to improve the performance of multisubject motor imagery BCI.
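The overlapping filter bank idea in this abstract can be sketched without the CNNs themselves: band-pass one trial through several overlapping sub-bands, run one model per band, and average the class probabilities. The band edges and the stand-in per-band probability outputs below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_bank(trial, fs, bands):
    """Band-pass one EEG trial (channels x samples) into several
    overlapping frequency components."""
    components = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        components.append(sosfiltfilt(sos, trial, axis=-1))
    return components

def fuse_probabilities(per_band_probs):
    """Integrate the per-band model outputs by averaging class probabilities."""
    return np.mean(per_band_probs, axis=0)

# Hypothetical sliding low-cut bank (Hz) and stand-in per-band CNN outputs
bands = [(4, 40), (8, 40), (12, 40)]
trial = np.random.randn(22, 1000)          # 22 channels, 4 s at 250 Hz
components = filter_bank(trial, fs=250, bands=bands)
fused = fuse_probabilities([[0.6, 0.4], [0.7, 0.3], [0.5, 0.5]])
```

The fused label is then simply the arg-max of the averaged probabilities, which is one common reading of "the output probabilities of multiple CNN models are integrated."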
- Published
- 2023
- Full Text
- View/download PDF
154. EEG-based emergency braking intention detection during simulated driving
- Author
-
Xinbin Liang, Yang Yu, Yadong Liu, Kaixuan Liu, Yaru Liu, and Zongtan Zhou
- Subjects
Emergency braking intention ,Electroencephalogram (EEG) ,Detection ,Simulated driving ,Brain-computer interface (BCI) ,Medical technology ,R855-855.5 - Abstract
Abstract Background Current research on electroencephalogram (EEG)-based detection of a driver's emergency braking intention focuses on recognizing emergency braking versus normal driving, with little attention to differentiating emergency braking from normal braking. Moreover, the classification algorithms used are mainly traditional machine learning methods, and the inputs to the algorithms are manually extracted features. Methods To this end, a novel EEG-based strategy for detecting a driver's emergency braking intention is proposed in this paper. The experiment was conducted on a simulated driving platform with three different scenarios: normal driving, normal braking, and emergency braking. We compared and analyzed the EEG feature maps of the two braking modes, and explored the use of traditional methods, Riemannian geometry-based methods, and deep learning-based methods to predict emergency braking intention, all using the raw EEG signals rather than manually extracted features as input. Results We recruited 10 subjects for the experiment and used the area under the receiver operating characteristic curve (AUC) and F1 score as evaluation metrics. The results showed that both the Riemannian geometry-based method and the deep learning-based method outperformed the traditional method. At 200 ms before the start of real braking, the AUC and F1 score of the deep learning-based EEGNet algorithm were 0.94 and 0.65 for emergency braking vs. normal driving, and 0.91 and 0.85 for emergency braking vs. normal braking, respectively. The EEG feature maps also showed a significant difference between emergency braking and normal braking. Overall, based on EEG signals, it was feasible to detect emergency braking from both normal driving and normal braking. Conclusions The study provides a user-centered framework for human–vehicle co-driving.
If the driver's intention to brake in an emergency can be accurately identified, the vehicle's automatic braking system can be activated hundreds of milliseconds earlier than the driver's real braking action, potentially avoiding some serious collisions.
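One way to read the "Riemannian geometry-based method" mentioned in this abstract is a minimum-distance-to-mean style classifier over trial covariance matrices. The sketch below shows the two building blocks under that assumption; the paper's exact pipeline is not specified here.

```python
import numpy as np
from scipy.linalg import eigvalsh

def trial_covariance(x):
    """Sample covariance of one raw EEG trial (channels x samples)."""
    x = x - x.mean(axis=1, keepdims=True)
    return x @ x.T / (x.shape[1] - 1)

def spd_distance(a, b):
    """Affine-invariant Riemannian distance between SPD matrices:
    sqrt of the summed squared logs of the generalized eigenvalues of (b, a)."""
    w = eigvalsh(b, a)
    return float(np.sqrt(np.sum(np.log(w) ** 2)))

def classify(trial, class_mean_covs):
    """Assign the trial to the class whose mean covariance is nearest."""
    c = trial_covariance(trial)
    return min(class_mean_covs, key=lambda k: spd_distance(c, class_mean_covs[k]))
```

Because the covariance is computed straight from the raw trial, this matches the abstract's point that no manually extracted features are needed.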
- Published
- 2023
- Full Text
- View/download PDF
155. Brain-computer interfacing for flexion and extension of bio-inspired robot fingers
- Author
-
H.M.K.K.M.B. Herath, W.R. de Mel, and Mamta Mittal
- Subjects
Brain-computer interface (BCI) ,Decoding finger movements ,Motor imagery (MI) ,Non-invasive EEG ,Robotics ,Electronic computers. Computer science ,QA75.5-76.95 ,Science - Abstract
Brain-computer interface (BCI) technology is a rapidly expanding area of study in assistive robot systems. Several advancements have been made in the BCI sector to aid disabled people. However, most studies have used a large number of electrodes, and most design structures differ from the anatomy of the human hand. Controlling robot fingers like a real human hand with fewer electrodes has yet to be achieved. This research investigates the controllability of robot fingers using motor imagery. The proposed electroencephalogram (EEG) acquisition system comprises eight EEG electrodes attached over the primary motor cortex region of the human brain. The anatomical behavior of the real human hand guided the robot's development. Initially, the performance of the robot finger model was evaluated in computer simulation. Finally, a robot model was developed, and flexion and extension movements were examined. According to the experiment's findings, finger flexion and extension control with eight EEG electrodes showed promising results, with an accuracy of 90.0±1.43% and a precision of 0.89. Furthermore, we observed that the accuracy of robot control decreased with participant age.
- Published
- 2023
- Full Text
- View/download PDF
156. Study of MI-BCI classification method based on the Riemannian transform of personalized EEG spatiotemporal features
- Author
-
Xiaotong Ding, Lei Yang, and Congsheng Li
- Subjects
riemannian tangent space ,motor imagery (mi) ,electroencephalogram (eeg) ,brain-computer interface (bci) ,filter banks ,neighborhood component analysis ,Biotechnology ,TP248.13-248.65 ,Mathematics ,QA1-939 - Abstract
Motor imagery (MI) is a traditional paradigm of brain-computer interface (BCI) and can assist users in creating direct connections between their brains and external equipment. The common spatial patterns algorithm is the most popular spatial filtering technique for extracting EEG signal features in MI-based BCI systems. However, because it considers only the spatial information of EEG signals and is susceptible to noise interference, its performance is diminished. In this study, we developed a Riemannian transform feature extraction method based on filter bank fusion combined with multiple time windows. First, we proposed a multi-time-window data segmentation and recombination method, combined with a filter bank, to create new data samples. This approach captures individual differences arising from the variation in time-frequency patterns across participants, thereby improving the model's generalization performance. Second, Riemannian geometry was used for feature extraction from the non-Euclidean structure of EEG data. Then, considering the non-Gaussian distribution of EEG signals, the neighborhood component analysis (NCA) algorithm was chosen for feature selection. Finally, to meet real-time requirements at low complexity, we employed a support vector machine (SVM) as the classification algorithm. The proposed model achieved improved accuracy and robustness: on the BCI Competition IV dataset 2a it achieved an accuracy of 89%, a kappa value of 0.73, and an AUC of 0.9. Furthermore, on data collected in our laboratory, the proposed method achieved an accuracy of 77.4%, surpassing the comparison models. This method not only significantly improves the classification accuracy of motor imagery EEG signals but also has significant implications for applications in brain-computer interfaces and neural engineering.
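The Riemannian tangent-space mapping at the heart of this kind of pipeline can be sketched in a few lines (feature selection via NCA and the SVM are omitted; the reference point and the sqrt(2) off-diagonal weighting follow the usual tangent-space formulation, not necessarily the paper's exact choices):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def tangent_space_vector(cov, ref):
    """Log-map an SPD covariance matrix to the tangent space at `ref`
    and vectorize its upper triangle (off-diagonals weighted by sqrt(2)
    so that Euclidean distance in the vector space matches the manifold
    metric at the reference point)."""
    p = fractional_matrix_power(ref, -0.5)
    s = np.real(logm(p @ cov @ p))
    rows, cols = np.triu_indices(s.shape[0])
    weights = np.where(rows == cols, 1.0, np.sqrt(2.0))
    return s[rows, cols] * weights
```

After this projection the EEG covariance features live in an ordinary Euclidean vector space, which is what makes a standard linear classifier such as an SVM applicable.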
- Published
- 2023
- Full Text
- View/download PDF
157. A deep learning-based brain-computer interaction system for speech and motor impairment
- Author
-
Nader A. Rahman Mohamed
- Subjects
Electroencephalogram (EEG) ,Brain-Computer Interface (BCI) ,Event Related Potential (ERP) ,Convolutional Neural Network (CNN) ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
Abstract Some people experience accidents, strokes, or diseases that cause both motor and speech disabilities, making it difficult to communicate with others. Those with paralysis face daily challenges in meeting their basic needs, particularly if they also have difficulty speaking. The speech of individuals with dysarthria, amyotrophic lateral sclerosis, and similar conditions can be challenging to understand. The proposed system for automatic recognition of daily basic needs aims to improve the quality of life of individuals with dysarthria and quadriplegic paralysis. The system achieves this by recognizing and analyzing brain signals and converting them to either audible voice commands or text sent to a healthcare provider's mobile phone, depending on the system settings. The proposed system uses a convolutional neural network (CNN) model to detect event-related potentials (ERPs) within the EEG signal in order to select one of six basic daily needs while their images are displayed randomly. Ten volunteers participated in this study, contributing to the dataset used for training, testing, and validation. The proposed approach achieved an accuracy of 78.41%.
- Published
- 2023
- Full Text
- View/download PDF
158. CTNet: a convolutional transformer network for EEG-based motor imagery classification
- Author
-
Zhao, Wei, Jiang, Xiaolu, Zhang, Baocan, Xiao, Shixiao, and Weng, Sujun
- Published
- 2024
- Full Text
- View/download PDF
159. A hybrid brain-computer interface using motor imagery and SSVEP Based on convolutional neural network
- Author
-
Wenwei Luo, Wanguang Yin, Quanying Liu, and Youzhi Qu
- Subjects
brain-computer interface (bci) ,convolutional neural networks (cnns) ,electroencephalography (eeg) ,steady-state visual evoked potential (ssvep) ,motor imagery (mi) ,Neurology. Diseases of the nervous system ,RC346-429 - Abstract
The key to an electroencephalography (EEG)-based brain-computer interface (BCI) lies in neural decoding, and its accuracy can be improved with hybrid BCI paradigms, that is, by fusing multiple paradigms. However, hybrid BCIs usually require a separate processing pipeline for the EEG signals of each paradigm, which greatly reduces the efficiency of EEG feature extraction and the generalizability of the model. Here, we propose a two-stream convolutional neural network (TSCNN)-based hybrid brain-computer interface that combines the steady-state visual evoked potential (SSVEP) and motor imagery (MI) paradigms. TSCNN automatically learns to extract EEG features for the two paradigms during training and improves decoding accuracy on the test data by 25.4% compared with MI mode and by 2.6% compared with SSVEP mode. Moreover, the versatility of TSCNN is verified by its considerable performance in both single-mode (70.2% for MI, 93.0% for SSVEP) and hybrid-mode scenarios (95.6% for MI-SSVEP hybrid). Our work will facilitate real-world applications of EEG-based BCI systems.
- Published
- 2023
- Full Text
- View/download PDF
160. A survey of deep learning-based classification methods for steady-state visual evoked potentials
- Author
-
Yudong Pan, Jianbo Chen, and Yangsong Zhang
- Subjects
deep learning (dl) ,brain-computer interface (bci) ,steady-state visual evoked potential (ssvep) ,frequency recognition ,Neurology. Diseases of the nervous system ,RC346-429 - Abstract
Purpose Steady-state visual evoked potential (SSVEP)-based BCIs have attracted great interest owing to their high information transfer rate (ITR) and minimal training requirements. The performance of SSVEP-based BCIs heavily depends on the classification methods. Deep learning (DL) technology provides an alternative avenue for data classification in SSVEP-based BCIs and has received increasing interest in recent years. This review summarizes the progress of DL-based classification methods for SSVEP data over the past decade. Materials and method The literature was searched and selected based on the research topics of DL and SSVEP. We categorized these methods into five classes: traditional neural network structure-based DL methods, DL methods inspired by traditional frequency recognition methods, attention mechanism-based DL models, transfer learning technology-based DL methods, and generative model-based recognition methods. Moreover, we analyzed the current challenges and presented future research opportunities. Conclusions This study provides a systematic description of the current state of development of DL-based SSVEP classification methods and sheds light on future research.
- Published
- 2023
- Full Text
- View/download PDF
161. Corrigendum: Riemannian geometry-based metrics to measure and reinforce user performance changes during brain-computer interface user training
- Author
-
Nicolas Ivanov and Tom Chau
- Subjects
brain-computer interface (BCI) ,electroencephalography (EEG) ,user training ,Riemannian geometry ,user evaluation ,simulation ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Published
- 2023
- Full Text
- View/download PDF
162. Editorial: Ear-centered sensing: from sensing principles to research and clinical devices, volume II
- Author
-
Jérémie Voix, Preben Kidmose, and Martin Georg Bleichner
- Subjects
EEG ,brain activity ,in-ear ,brain-computer interface (BCI) ,wearable and mobile computing ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Published
- 2023
- Full Text
- View/download PDF
163. Fatigue factors and fatigue indices in SSVEP-based brain-computer interfaces: a systematic review and meta-analysis
- Author
-
Maedeh Azadi Moghadam and Ali Maleki
- Subjects
brain-computer interface (BCI) ,steady-state visual evoked potential (SSVEP) ,fatigue ,visual stimulation paradigm ,quantitative indices ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Background Fatigue is a serious challenge when applying steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) in the real world. Many researchers have used quantitative indices to study the effect of visual stimuli on fatigue. Across a wide range of fatigue analyses, there are contradictions and inconsistencies in the behavior of fatigue indicators. New method In this study, for the first time, a systematic review and meta-analysis were performed on fatigue indices and on fatigue caused by the stimulation paradigm. We queried three scientific search engines for studies published between 2000 and 2022. The inclusion criterion was papers investigating mental and visual fatigue from performing a visual task, measured using electroencephalogram (EEG) signals. Results Attractiveness and variation are the most effective ways to reduce BCI fatigue. Accordingly, zoom motion, Newton's-ring motion, and cue patterns reduce fatigue. While the color of the cue can effectively reduce fatigue, its shape and background have no effect. Additionally, questionnaires and quantitative indicators such as frequency indices, signal-to-noise ratio (SNR), SSVEP amplitude, and multiscale entropy were used to assess fatigue. The meta-analysis indicated that when a person is fatigued, the spectral amplitudes of alpha and theta and the (α+θ)/β ratio increase significantly, while SNR and SSVEP amplitude decrease significantly. Conclusion The outcomes of this study can be used to design more optimal stimulation protocols that cause less fatigue. Moreover, the level of fatigue can be quantitatively assessed with these indicators, without relying on participants' self-reports.
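The quantitative fatigue indicators in this meta-analysis can be computed directly from a raw signal. A sketch of the (theta + alpha)/beta ratio using a Welch power spectrum follows; the band edges are the conventional ones, which is an assumption on my part rather than the review's stated choice.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean power spectral density of a 1-D signal in [lo, hi] Hz (Welch)."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    return pxx[(f >= lo) & (f <= hi)].mean()

def fatigue_ratio(x, fs):
    """(theta + alpha) / beta; per the meta-analysis this rises with fatigue."""
    theta = band_power(x, fs, 4, 8)
    alpha = band_power(x, fs, 8, 13)
    beta = band_power(x, fs, 13, 30)
    return (theta + alpha) / beta
```

For a strongly alpha-dominated signal (e.g. a 10 Hz oscillation) the ratio is well above 1, which is the direction the review reports for fatigued subjects.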
- Published
- 2023
- Full Text
- View/download PDF
164. Influence of spatial frequency in visual stimuli for cVEP-based BCIs: evaluation of performance and user experience
- Author
-
Álvaro Fernández-Rodríguez, Víctor Martínez-Cagigal, Eduardo Santamaría-Vázquez, Ricardo Ron-Angevin, and Roberto Hornero
- Subjects
brain-computer interface (BCI) ,code-modulated visual evoked potential (c-VEP) ,stimulus ,spatial frequency ,checkerboard ,visual fatigue ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Code-modulated visual evoked potentials (c-VEPs) are an innovative control signal utilized in brain-computer interfaces (BCIs) with promising performance. Prior studies on steady-state visual evoked potentials (SSVEPs) have indicated that the spatial frequency of checkerboard-like stimuli influences both performance and user experience. Spatial frequency refers to the dimensions of the individual squares comprising the visual stimulus, quantified in cycles (i.e., the number of black-white square pairs) per degree of visual angle. However, the specific effects of this parameter on c-VEP-based BCIs remain unexplored. Therefore, the objective of this study is to investigate the role of the spatial frequency of checkerboard-like visual stimuli in a c-VEP-based BCI. Sixteen participants evaluated selection matrices with eight spatial frequencies: C001 (0 c/°, 1×1 squares), C002 (0.15 c/°, 2×2 squares), C004 (0.3 c/°, 4×4 squares), C008 (0.6 c/°, 8×8 squares), C016 (1.2 c/°, 16×16 squares), C032 (2.4 c/°, 32×32 squares), C064 (4.79 c/°, 64×64 squares), and C128 (9.58 c/°, 128×128 squares). These conditions were tested in an online spelling task consisting of 18 trials, each conducted on a 3×3 command interface. In addition to accuracy and information transfer rate (ITR), subjective measures of comfort, ocular irritation, and satisfaction were collected. Significant differences in performance and comfort were observed across stimulus spatial frequencies. Although all conditions achieved mean accuracy over 95% after 2.1 s of trial duration, C016 stood out in terms of user experience. This condition not only achieved a mean accuracy of 96.53% and 164.54 bits/min with a trial duration of 1.05 s, but was also reported to be significantly more comfortable than the traditional C001 stimulus. Since both features are key for BCI development, higher spatial frequencies than the classical black-to-white stimulus might be more adequate for c-VEP systems.
Hence, we assert that the spatial frequency should be carefully considered in the development of future applications for c-VEP-based BCIs.
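The spatial-frequency definition used in this study (black-white square pairs per degree of visual angle) reduces to a one-line computation. In the sketch below, the stimulus size and viewing distance are hypothetical values, not the study's setup.

```python
import math

def cycles_per_degree(n_squares_per_side, stimulus_cm, distance_cm):
    """Checkerboard spatial frequency: one cycle = one black-white square
    pair, divided by the stimulus's visual angle in degrees."""
    pairs = n_squares_per_side / 2.0
    visual_angle = 2.0 * math.degrees(math.atan(stimulus_cm / (2.0 * distance_cm)))
    return pairs / visual_angle
```

Doubling the number of squares per side at a fixed stimulus size doubles the spatial frequency, which is exactly the progression seen in the C002 → C004 → C008 conditions above.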
- Published
- 2023
- Full Text
- View/download PDF
165. Editorial: Advances in brain-computer interface technologies for closed-loop neuromodulation
- Author
-
Yuchen Xu, Hubin Zhao, and Cosimo Ieracitano
- Subjects
brain-computer interface (BCI) ,neuromodulation ,fNIR ,EEG ,sensors ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Published
- 2023
- Full Text
- View/download PDF
166. IMH-Net: a convolutional neural network for end-to-end EEG motor imagery classification.
- Author
-
Liu, Menghao, Li, Tingting, Zhang, Xu, Yang, Yang, Zhou, Zhiyong, and Fu, Tianhao
- Abstract
Abstract As the main component of brain-computer interface (BCI) technology, EEG-based classification algorithms have developed rapidly. Previous algorithms were often based on subject-dependent settings, meaning the BCI needed to be calibrated for new users. In this work, we propose IMH-Net, an end-to-end subject-independent model. The model first uses Inception blocks to extract the frequency-domain features of the data, then further compresses the feature vectors to extract spatial-domain features, and finally learns global information and performs classification through a multi-head attention mechanism. On the OpenBMI dataset, IMH-Net obtained 73.90 ± 13.10% accuracy and a 73.09 ± 14.99% F1-score in the subject-independent setting, improving accuracy by 1.96% compared with the comparison model. On the BCI Competition IV dataset 2a, this model also achieved the highest accuracy and F1-score in the subject-dependent setting. The proposed IMH-Net model improves the accuracy of subject-independent motor imagery (MI) decoding, and its high robustness gives it strong practical value in the field of BCI.
- Published
- 2023
- Full Text
- View/download PDF
167. Normalized deep learning algorithms based information aggregation functions to classify motor imagery EEG signal.
- Author
-
Al-Hamadani, Ammar A., Mohammed, Mamoun J., and Tariq, Suphian M.
- Subjects
Machine learning, Motor imagery (Cognition), Deep learning, Convolutional neural networks, Fast Fourier transforms, Electroencephalography - Abstract
Recently, the discipline of brain-computer interfaces (BCI) has attracted attention for exploiting electroencephalograph (EEG) mental activities such as motor imagery (MI). Neurons in the human brain are activated during these MI tasks and generate an electrical potential of small magnitude that reaches the scalp as a signal. Classification of MI data is a primary problem in BCI systems, and improving the classification accuracy of these biomedical signals is a significant task for the scientific community. This work proposes two main ideas: a new preprocessing technique based on four EEG frequency bands, and a new stacking method for three deep-learning architectures used to decode three classes of MI signals. The preprocessing stage uses the Fast Fourier Transform to perform frequency analysis and data aggregation functions to enhance the data representation. Performance was evaluated using well-defined metrics (accuracy, precision, recall, and F1-score) across multiple batch sizes, optimizers, and epochs. Experimental results were evaluated using a publicly available dataset (BCI Competition IV dataset 2a) and local data collected from four subjects using the EMOTIV EPOC headset. The highest F1-scores achieved with the R-CNN model were 94% and 84% on these two datasets, respectively. Our proposed models also outperform many related models in the literature.
- Published
- 2023
- Full Text
- View/download PDF
168. Classifying human emotions in HRI: applying global optimization model to EEG brain signals.
- Author
-
Staffa, Mariacarla, D'Errico, Lorenzo, Sansalone, Simone, and Alimardani, Maryam
- Subjects
Global optimization, Emotions, Human-robot interaction, Electroencephalography, Social robots, Theory of mind, Machine learning - Abstract
Significant efforts have been made in the past decade to humanize both the form and function of social robots to increase their acceptance among humans. To this end, social robots have recently been combined with brain-computer interface (BCI) systems in an attempt to give them an understanding of human mental states, particularly emotions. However, emotion recognition using BCIs poses several challenges, such as the subjectivity of emotions, contextual dependency, and a lack of reliable neuro-metrics for real-time processing of emotions. Furthermore, the use of BCI systems introduces its own set of limitations, such as the bias-variance trade-off, dimensionality, and noise in the input data space. In this study, we sought to address some of these challenges by detecting human emotional states from EEG brain activity during human-robot interaction (HRI). EEG signals were collected from 10 participants who interacted with a Pepper robot that demonstrated either a positive or a negative personality. Using emotion valence and arousal measures derived from frontal brain asymmetry (FBA), several machine learning models were trained to classify the participants' mental states in response to the robot's personality. To improve classification accuracy, all proposed classifiers were subjected to a Global Optimization Model (GOM) based on feature selection and hyperparameter optimization techniques. The results showed that it is possible to classify a user's emotional responses to the robot's behavior from the EEG signals with an accuracy of up to 92%. The outcome of the current study contributes to the first level of the Theory of Mind (ToM) in human-robot interaction, enabling robots to comprehend users' emotional responses and attribute mental states to them. Our work advances the field of social and assistive robotics by paving the way for the development of more empathetic and responsive HRI in the future.
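The frontal brain asymmetry (FBA) valence measure mentioned in this abstract is commonly computed as a log-ratio of alpha power over homologous frontal electrodes (e.g. F4 vs. F3). Whether the study used exactly this formulation is an assumption; the sketch below shows the textbook version.

```python
import math

def frontal_alpha_asymmetry(alpha_left, alpha_right):
    """Classic FBA valence index: ln(right alpha power) - ln(left alpha
    power). Because alpha is inversely related to cortical activity, more
    positive values are usually read as relatively greater left-frontal
    activation, i.e. approach / positive-valence motivation."""
    return math.log(alpha_right) - math.log(alpha_left)
```

The resulting scalar (one per epoch) is the kind of compact neuro-metric that can be fed to the machine learning classifiers described above.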
- Published
- 2023
- Full Text
- View/download PDF
169. MI-DAGSC: A domain adaptation approach incorporating comprehensive information from MI-EEG signals.
- Author
-
Zhang, Dongxue, Li, Huiying, Xie, Jingmeng, and Li, Dajun
- Subjects
Brain-computer interfaces, Motor imagery (Cognition), Electroencephalography, Block designs, Neurosciences - Abstract
Non-stationarity of EEG signals leads to high variability between subjects, making it challenging to directly use data from other subjects (the source domain) for the classifier of the current subject (the target domain). In this study, we propose MI-DAGSC to address domain adaptation challenges in EEG-based motor imagery (MI) decoding. By combining domain-level information, class-level information, and inter-sample structure information, our model effectively aligns the feature distributions of the source and target domains. This work is an extension of our previous domain adaptation work, MI-DABAN (Li et al., 2023). Building on MI-DABAN, MI-DAGSC adds Sample-Feature Blocks (SFBs) and Graph Convolution Blocks (GCBs) to focus on intra-sample and inter-sample information. The synergistic integration of SFBs and GCBs enables the model to capture comprehensive information and understand the relationships between samples, thus improving representation learning. Furthermore, we introduce a triplet loss to enhance the alignment and compactness of feature representations. Extensive experiments on real EEG datasets demonstrate the effectiveness of MI-DAGSC, confirming that our method makes a valuable contribution to MI-EEG decoding. Moreover, it holds great potential for applications in brain-computer interface systems and neuroscience research. The code of the proposed architecture is available under https://github.com/zhangdx21/MI-DAGSC.
• A model for reducing individual variability and leveraging all subjects' labeled data.
• A domain-adaptive method considering comprehensive information.
• A framework for exploring inter-sample knowledge in EEG signals.
• A non-shared adversarial method for precise distribution alignment.
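The triplet loss introduced in this abstract has a standard margin form: pull a same-class (positive) embedding toward the anchor and push a different-class (negative) embedding at least `margin` further away. A NumPy sketch follows; the paper's margin value is not given, so 1.0 is a placeholder.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss with Euclidean distances:
    max(d(anchor, positive) - d(anchor, negative) + margin, 0)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return float(max(d_ap - d_an + margin, 0.0))
```

The loss is zero once the negative is `margin` farther from the anchor than the positive, which is what produces the compact, well-separated class clusters the abstract aims for.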
- Published
- 2023
- Full Text
- View/download PDF
170. Brain Computer Interface Based Thought Recognition System Using a Hybrid Deep Learning Model.
- Author
-
D.A, Janeera and Sasipriya, S.
- Subjects
Computer interfaces, Deep learning, Convolutional neural networks, Recurrent neural networks, Brain-computer interfaces, Electronic equipment - Abstract
A brain-computer interface (BCI) connects the human brain with computers and electronic devices. The signals from the human brain are processed using several deep-learning techniques to convert them into a comprehensible form. Among these techniques, the convolutional neural network (CNN) model has excellent performance in BCI recognition. However, the existing CNN model is prone to over-fitting and has limited accuracy, and model complexity must be increased to achieve better accuracy. To address these concerns, a novel hybrid R-CNN model for BCI thought recognition is proposed in this work, combining the convolution layer of a CNN with the long short-term memory (LSTM) layer of a recurrent neural network (RNN). A batch normalization layer is included to reduce over-fitting, and a rectified linear unit (ReLU) activation speeds up training to as few as five epochs, along with a custom optimizer that tunes some of the optimizer's default values. Experiments are performed with two BCI datasets of different sizes: the first is 6.5 MB with 60,684 records and three classes, and the second is 10.1 MB with 94,119 records and five classes. The proposed hybrid model exhibits a higher average accuracy of 95% on the first dataset and 98% on the second, which is superior to the accuracy of existing deep-learning models. Furthermore, the efficiency of the proposed hybrid R-CNN model is evaluated with additional performance measures, such as F1-score, recall, and precision.
- Published
- 2023
- Full Text
- View/download PDF
171. Enhancing Cross-Subject Motor Imagery Classification in EEG-Based Brain–Computer Interfaces by Using Multi-Branch CNN.
- Author
-
Chowdhury, Radia Rayan, Muhammad, Yar, and Adeel, Usman
- Subjects
Brain-computer interfaces, Convolutional neural networks, Motor imagery (Cognition), Human activity recognition, Electroencephalography, Deep learning, Neural computers, Feature extraction - Abstract
A brain–computer interface (BCI) is a computer-based system that allows for communication between the brain and the outer world, enabling users to interact with computers using neural activity, obtained here from electroencephalogram (EEG) signals. A significant obstacle to the development of EEG-based BCIs is the classification of subject-independent motor imagery data, since EEG data are highly individualized. Deep learning techniques such as the convolutional neural network (CNN) have demonstrated their strength in feature extraction for increasing classification accuracy. In this paper, we present a multi-branch (five-branch) 2D convolutional neural network that employs different hyperparameters for every branch. The proposed model achieved promising results for cross-subject classification and outperformed EEGNet, ShallowConvNet, DeepConvNet, MMCNN, and EEGNet_Fusion on three public datasets. Our proposed model, EEGNet Fusion V2, achieves 89.6% and 87.8% accuracy for the actual and imagined motor activity of the eegmmidb dataset, and scores 74.3% and 84.1% on the BCI IV-2a and IV-2b datasets, respectively. However, the proposed model has a somewhat higher computational cost, taking around 3.5 times more computation time per sample than EEGNet_Fusion.
- Published
- 2023
- Full Text
- View/download PDF
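The multi-branch idea, parallel convolutions with different hyperparameters whose outputs are fused, can be sketched with temporal kernels of different lengths; the kernel sizes and random weights below are illustrative, not the EEGNet Fusion V2 configuration:

```python
import numpy as np

def branch(signal, kernel_len, rng):
    """One 'branch': a temporal kernel followed by global average pooling."""
    kernel = rng.normal(size=kernel_len)          # stand-in for learned weights
    feat = np.convolve(signal, kernel, mode="valid")
    return feat.mean()

rng = np.random.default_rng(1)
eeg = rng.normal(size=640)                        # one channel, 640 samples

# Five branches with different temporal receptive fields, fused by concatenation.
kernel_lens = [8, 16, 32, 64, 128]
fused = np.array([branch(eeg, k, rng) for k in kernel_lens])
```

Each branch sees the signal at a different temporal scale; a downstream classifier operates on the concatenated (fused) feature vector.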
172. Classification Algorithm for Electroencephalogram-based Motor Imagery Using Hybrid Neural Network with Spatio-temporal Convolution and Multi-head Attention Mechanism.
- Author
-
Shi, Xingbin, Li, Baojiang, Wang, Wenlong, Qin, Yuxin, Wang, Haiyan, and Wang, Xichao
- Subjects
- *
CONVOLUTIONAL neural networks , *MOTOR imagery (Cognition) , *CLASSIFICATION algorithms , *ELECTROENCEPHALOGRAPHY , *BRAIN-computer interfaces , *SPATIOTEMPORAL processes - Abstract
• Increased decoding precision of motor imagery EEG signals. • The hybrid temporal-spatial convolution and Transformer method achieves good results. • A multi-head self-attention mechanism better extracts features. • Few training parameters, short training time, and high precision. Motor imagery (MI) is a brain-computer interface (BCI) technique in which specific brain regions are activated when people imagine their limbs (or muscles) moving, even without actual movement. The technology converts electroencephalogram (EEG) signals generated by the brain into computer-readable commands by measuring neural activity. Classification of motor imagery is one of the tasks in BCI. Researchers have done a lot of work on motor imagery classification, and the existing literature has relatively mature decoding methods for two-class motor tasks. However, as the categories of EEG-based motor imagery tasks increase, further exploration is needed for decoding four-class motor imagery tasks. In this study, we designed a hybrid neural network that combines spatiotemporal convolution and attention mechanisms. Specifically, the data is first processed by spatiotemporal convolution to extract features and then processed by a multi-branch convolution block. Finally, the processed data is input into the encoder layer of the Transformer for a self-attention calculation to obtain the classification results. Our approach was tested on the well-known MI datasets BCI Competition IV 2a and 2b; on the 2a dataset it achieves a global average classification accuracy of 83.3% and a kappa value of 0.78. Experimental results show that the proposed method outperforms most existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
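The self-attention calculation at the core of the Transformer encoder mentioned above can be sketched as scaled dot-product attention over a feature sequence (a generic single-head sketch, not the paper's exact architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention for one head.
    x: (seq_len, d_model); wq/wk/wv: (d_model, d_head)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ v, weights

rng = np.random.default_rng(2)
seq, d_model, d_head = 10, 16, 8
x = rng.normal(size=(seq, d_model))           # e.g. 10 EEG feature vectors
wq, wk, wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
```

A multi-head version simply runs several such heads in parallel and concatenates their outputs before a final projection.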
173. Dictionary Learning and Greedy Algorithms for Removing Eye Blink Artifacts from EEG Signals.
- Author
-
Sreeja, S. R., Rajmohan, Shathanaa, Sodhi, Manjit Singh, Samanta, Debasis, and Mitra, Pabitra
- Subjects
- *
MACHINE learning , *GREEDY algorithms , *ORTHOGONAL matching pursuit , *BLINKING (Physiology) , *ELECTROENCEPHALOGRAPHY , *BRAIN-computer interfaces , *WAKEFULNESS - Abstract
Brain activity recorded using an electroencephalography (EEG) device is often contaminated with eye blink (EB) artifacts. These artifacts lead to poor performance of brain–computer interface (BCI) systems. Hence, for better BCI performance, EB artifacts need to be removed from EEG signals without any loss of information. Of the several methods that exist in the literature to remove EB artifacts, the sparsity-based method is one that has proved effective. In the sparsity-based method, an over-complete dictionary is learned from the EEG data itself using a K-SVD-based algorithm and is designed to model EB characteristics. In this work, two different greedy algorithms, namely orthogonal matching pursuit (OMP) and adaptive OMP (A-OMP), have been applied over the K-SVD algorithm to assess their performance in removing EB artifacts from EEG signals. To prove the efficiency of the greedy algorithms, the experiment was done with real EEG data. The results show that A-OMP is computationally more efficient and can accomplish successful sparse representation of EEG signals. Moreover, this sparsity-based algorithm can eliminate EB artifacts accurately from EEG signals. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
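OMP itself is short to state: greedily select the dictionary atom most correlated with the residual, then re-fit all selected atoms by least squares. A minimal sketch on a synthetic dictionary (the K-SVD dictionary-learning step is omitted):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: sparse code of y over dictionary D
    (atoms as columns of D)."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares re-fit on the selected atoms.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(3)
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
true = np.zeros(64)
true[[5, 40]] = [2.0, -1.5]             # a 2-sparse ground-truth code
y = D @ true
est = omp(D, y, n_nonzero=2)
```

In the artifact-removal setting, the atoms that model eye-blink morphology are identified by the sparse code and their contribution is subtracted from the contaminated EEG.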
174. Brain–Computer Interface: The HOL–SSA Decomposition and Two-Phase Classification on the HGD EEG Data.
- Author
-
Antony, Mary Judith, Sankaralingam, Baghavathi Priya, Khan, Shakir, Almjally, Abrar, Almujally, Nouf Abdullah, and Mahendran, Rakesh Kumar
- Subjects
- *
BRAIN-computer interfaces , *ELECTROENCEPHALOGRAPHY , *INDEPENDENT component analysis , *MOTOR imagery (Cognition) , *SPECTRUM analysis - Abstract
An efficient processing approach is essential for increasing identification accuracy since the electroencephalogram (EEG) signals produced by the Brain–Computer Interface (BCI) apparatus are nonlinear, nonstationary, and time-varying. The interpretation of scalp EEG recordings can be hampered by nonbrain contributions to electroencephalographic (EEG) signals, referred to as artifacts. Common disturbances in the capture of EEG signals include electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG) and other artifacts, which have a significant impact on the extraction of meaningful information. This study suggests integrating the Singular Spectrum Analysis (SSA) and Independent Component Analysis (ICA) methods to preprocess the EEG data. The key objective of our research was to employ Higher-Order Linear-Moment-based SSA (HOL–SSA) to decompose EEG signals into multivariate components, followed by extracting source signals using Online Recursive ICA (ORICA). This approach effectively improves artifact rejection. Experimental results using the motor imagery High-Gamma Dataset validate our method's ability to identify and remove artifacts such as EOG, ECG, and EMG from EEG data, while preserving essential brain activity. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
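The SSA building block, decomposing a single-channel signal into additive components via a Hankel embedding and SVD, can be sketched as below; this is plain SSA, not the higher-order-moment variant (HOL–SSA) the paper proposes:

```python
import numpy as np

def ssa_components(x, window):
    """Basic singular spectrum analysis: one reconstructed component per
    singular value; the components sum back to the original signal."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix built from lagged copies of the signal.
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for i in range(len(s)):
        elem = s[i] * np.outer(u[:, i], vt[i])       # rank-1 piece
        # Anti-diagonal averaging maps the matrix back to a time series.
        comp = np.zeros(n)
        counts = np.zeros(n)
        for r in range(window):
            for c in range(k):
                comp[r + c] += elem[r, c]
                counts[r + c] += 1
        comps.append(comp / counts)
    return np.array(comps)

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
comps = ssa_components(x, window=40)
```

An artifact-rejection pipeline would then keep or discard components (e.g. those dominated by ocular activity) before reconstruction; here the decomposition is exact, so the components sum back to `x`.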
175. SSVEP-based brain–computer interface for music using a low-density EEG system.
- Author
-
Venkatesh, Satvik, Miranda, Eduardo Reck, and Braund, Edward
- Abstract
In this paper, we present a bespoke brain–computer interface (BCI), which was developed for a person with severe motor impairments, who was previously a violinist, to allow performing and composing music at home. It uses steady-state visually evoked potential (SSVEP) and adopts a dry, low-density, and wireless electroencephalogram (EEG) headset. In this study, we investigated two parameters: (1) placement of the EEG headset and (2) inter-stimulus distance, and found that the former significantly improved the information transfer rate (ITR). To analyze EEG, we adopted canonical correlation analysis (CCA) without weight-calibration. The BCI for musical performance realized a high ITR of 37.59 ± 9.86 bits min−1 and a mean accuracy of 88.89 ± 10.09%. The BCI for musical composition obtained an ITR of 14.91 ± 2.87 bits min−1 and a mean accuracy of 95.83 ± 6.97%. The BCI was successfully deployed to the person with severe motor impairments. She regularly uses it for musical composition at home, demonstrating how BCIs can be translated from laboratories to real-world scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
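Calibration-free CCA decoding of SSVEP correlates the multichannel EEG with sine–cosine reference templates at each candidate stimulation frequency and picks the frequency with the largest canonical correlation. A sketch on synthetic data (the sampling rate and frequency set are illustrative, not the study's stimulus design):

```python
import numpy as np

def max_canonical_corr(x, y):
    """Largest canonical correlation between the column spaces of x and y."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    qx, _ = np.linalg.qr(x)
    qy, _ = np.linalg.qr(y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return min(1.0, s[0])

def cca_ssvep_classify(eeg, fs, freqs, n_harmonics=2):
    """Pick the stimulus frequency whose reference template best matches the EEG."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack(
            [fn(2 * np.pi * f * h * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.sin, np.cos)])
        scores.append(max_canonical_corr(eeg, ref))
    return freqs[int(np.argmax(scores))], scores

rng = np.random.default_rng(4)
fs, f_true = 250, 10.0
t = np.arange(2 * fs) / fs                       # 2 s of data
target = np.sin(2 * np.pi * f_true * t)          # SSVEP response at 10 Hz
# 4 channels: attenuated response plus noise.
eeg = np.column_stack([0.5 * target + rng.normal(size=len(t)) for _ in range(4)])
pred, scores = cca_ssvep_classify(eeg, fs, [8.0, 10.0, 12.0, 15.0])
```

No per-user training is required: the templates are fixed sinusoids, which is what makes this approach attractive for deployment outside the lab.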
176. Linear Diophantine equation (LDE) decoder: A training‐free decoding algorithm for multifrequency SSVEP with reduced computation cost.
- Author
-
Mu, Jing, Tan, Ying, Grayden, David B., and Oetomo, Denny
- Subjects
DIOPHANTINE equations ,DECODING algorithms ,LINEAR equations ,DECODERS & decoding ,VISUAL evoked potentials ,TIME complexity - Abstract
Multifrequency steady-state visual evoked potentials (SSVEPs) have been developed to extend the capability of SSVEP-based brain-machine interfaces (BMIs) to complex applications that have large numbers of targets. Even though various multifrequency stimulation methods have been introduced, decoding algorithms for multifrequency SSVEP are still in early development. The recently developed multifrequency canonical correlation analysis (MFCCA) was shown to be a feasible training-free option for decoding multifrequency SSVEPs. However, the time complexity of MFCCA is O(n^3), which leads to long computation times as n grows, where n represents the input size in decoding. In this paper, a novel decoding algorithm is proposed with the aim of reducing the time complexity. This algorithm is based on linear Diophantine equation solvers and has a reduced computation cost of O(n log n) while remaining training-free. Our simulation results demonstrated that the linear Diophantine equation (LDE) decoder's run time is only one fifth of MFCCA's run time under respective optimal settings on 5-s single-channel data. This reduced computation cost makes it easier to implement multifrequency SSVEP in real-time systems. The effectiveness of this new decoding algorithm is validated with nine healthy participants using dry-electrode scalp electroencephalography (EEG). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
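The primitive behind an LDE-based decoder is solving a linear Diophantine equation a·x + b·y = c in integers via the extended Euclidean algorithm, which runs in logarithmic time. A sketch of that primitive (the full decoder's mapping from multifrequency SSVEP responses to such equations is beyond this snippet):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_lde(a, b, c):
    """One integer solution (x, y) of a*x + b*y == c, or None if unsolvable.
    A solution exists iff gcd(a, b) divides c."""
    g, x0, y0 = ext_gcd(a, b)
    if c % g != 0:
        return None
    k = c // g
    return x0 * k, y0 * k

sol = solve_lde(7, 5, 1)   # some integers (x, y) with 7x + 5y = 1
```

Because each solve touches only O(log min(a, b)) divisions, batching many such equations stays cheap, which is consistent with the O(n log n) cost the abstract reports.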
177. Two‐phase classification: ANN and A‐SVM classifiers on motor imagery BCI.
- Author
-
A, Mary Judith, S, Baghavathi Priya, Mahendran, Rakesh Kumar, Gadekallu, Thippa Reddy, and Ambati, Loknath Sai
- Subjects
MOTOR imagery (Cognition) ,BRAIN-computer interfaces ,SENSORIMOTOR cortex ,FEATURE extraction ,CLASSIFICATION algorithms ,ELECTROENCEPHALOGRAPHY ,INDEPENDENT component analysis - Abstract
Brain–computer interfaces (BCIs) based on electroencephalograms (EEG) monitor mental activity with the ultimate objective of allowing people to communicate with computers solely via their thoughts. To do this, users must create precise cerebral activity patterns that the system uses as control signals. A common activity used to elicit such signals is motor imagery (MI), in which certain signals are created in the sensorimotor cortex while imagining movements. The three phases of the traditional EEG–BCI processing pipeline are preprocessing, feature extraction, and classification. We present classification advances and track performance gains in 4-class MI-based BCIs. In this study, 4-class MI events are produced via imagined movement of the left hand, right hand, feet, and tongue. A two-phase classification technique is provided: in the first phase, ANN classifiers discriminate between pair-wise MI tasks; in the second, an adaptive SVM classifier determines the user's intended task based on the weighted outputs of the first-phase classifiers. An adaptive classifier is one technique to maintain consistent performance, reduce training time, and eliminate non-stationarities, all of which are required for efficient BCI performance. The suggested approach outperformed conventional two-stage classification algorithms on MI data, according to experimental findings. The average classification accuracy of this technique is 96% on the BCI Competition IV 2a dataset, a 4% improvement over the comparison approach. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
178. Towards best practice of interpreting deep learning models for EEG-based brain computer interfaces.
- Author
-
Jian Cui, Liqiang Yuan, Zhaoxiang Wang, Ruilin Li, and Tianzi Jiang
- Subjects
DEEP learning ,COMPUTER interfaces ,CONVOLUTIONAL neural networks ,BEST practices ,TRUST - Abstract
Introduction: As deep learning has achieved state-of-the-art performance for many tasks of EEG-based BCI, many efforts have been made in recent years to understand what the models have learned. This is commonly done by generating a heatmap indicating the extent to which each pixel of the input contributes to the final classification of a trained model. Despite their wide use, it is not yet understood to what extent the obtained interpretation results can be trusted and how accurately they reflect model decisions. Methods: We conducted studies to quantitatively evaluate seven different deep interpretation techniques across different models and datasets for EEG-based BCI. Results: The results reveal the importance of selecting a proper interpretation technique as the initial step. In addition, we find that the quality of the interpretation results is inconsistent across individual samples, even when a method with good overall performance is used. Many factors, including model structure and dataset type, could potentially affect the quality of the interpretation results. Discussion: Based on these observations, we propose a set of procedures that allow the interpretation results to be presented in an understandable and trusted way. We illustrate the usefulness of our method for EEG-based BCI with instances selected from different scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
179. A novel noninvasive brain–computer interface by imagining isometric force levels.
- Author
-
Hualiang, Li, Xupeng, Ye, Yuzhong, Liu, Tingjun, Xie, Wei, Tan, Yali, Shen, Qiru, Wang, Chaolin, Xiong, Yu, Wang, Weilin, Lin, and Long, Jinyi
- Abstract
Physiological circuits differ across increasing isometric force levels during unilateral contraction. Therefore, we first explored the possibility of predicting the force level based on electroencephalogram (EEG) activity recorded during a single trial of unilateral 5% or 40% of maximal isometric voluntary contraction (MVC) in right-hand grip imagination. Nine healthy subjects were involved in this study. The subjects were required to randomly perform 20 trials for each force level while imagining a right-hand grip. We proposed the use of common spatial patterns (CSPs) and coherence between EEG signals as features in a support vector machine for force level prediction. The results showed that the force levels could be predicted through single-trial EEGs while imagining the grip (mean accuracy = 81.4 ± 13.29%). Additionally, we tested the possibility of online control of a ball game using the above paradigm through unilateral grip imagination at different force levels (i.e., 5% of MVC imagination and 40% of MVC imagination for right-hand movement control). Subjects played the ball games effectively by controlling direction with our novel BCI system (n = 9, mean accuracy = 76.67 ± 9.35%). Data analysis validated the use of our BCI system in the online control of a ball game. This information may provide additional commands for the control of robots by users through combinations with other traditional brain–computer interfaces, e.g., different limb imaginations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
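CSP, used above as a feature extractor, finds spatial filters that maximize the variance ratio between two classes via a whitening step and an eigendecomposition. A minimal two-class sketch on synthetic trials (not the authors' exact pipeline; the synthetic channel variances are illustrative):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """CSP spatial filters from two lists of (channels, samples) trials."""
    def avg_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space.
    evals, evecs = np.linalg.eigh(ca + cb)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, u = np.linalg.eigh(whiten @ ca @ whiten.T)   # ascending eigenvalues
    w = u.T @ whiten                                # rows are spatial filters
    # Keep the filters for the smallest and largest eigenvalues.
    picks = list(range(n_pairs)) + list(range(len(d) - n_pairs, len(d)))
    return w[picks]

rng = np.random.default_rng(5)
def make_trials(var_ch0, var_ch1, n=20):
    return [np.vstack([rng.normal(scale=var_ch0 ** 0.5, size=200),
                       rng.normal(scale=var_ch1 ** 0.5, size=200),
                       rng.normal(size=200)]) for _ in range(n)]

trials_a = make_trials(4.0, 1.0)      # class A: channel 0 strong
trials_b = make_trials(1.0, 4.0)      # class B: channel 1 strong
w = csp_filters(trials_a, trials_b)
# Log-variance features after spatial filtering, one value per filter.
feat_a = np.log(np.var(w @ trials_a[0], axis=1))
```

The log-variance of each filtered trial is the classic CSP feature; the study pairs such features with coherence measures before the SVM stage.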
180. Coherence-based channel selection and Riemannian geometry features for magnetoencephalography decoding
- Author
-
Tang, Chao, Gao, Tianyi, Wang, Gang, and Chen, Badong
- Published
- 2024
- Full Text
- View/download PDF
181. Editorial: Recent advancements in brain-computer interfaces-based limb rehabilitation.
- Author
-
Lakshminarayanan, Kishor, Madathil, Deepa, Murari, Bhaskar Mohan, and Shah, Rakshit
- Subjects
MUSIC therapy ,MACHINE learning ,REHABILITATION technology ,FEATURE extraction ,VIRTUAL reality therapy ,EYE tracking ,BLINKING (Physiology) - Published
- 2024
- Full Text
- View/download PDF
182. Neuroscience Empowering Society: BCI Insights and Application
- Author
-
Harish S. Sinai Velingkar, Roopa Kulkarni, and Prashant Patavardhan
- Subjects
brainwaves ,brain–computer interface (BCI) ,neurological abnormalities ,biofeedback mechanisms ,societal betterment ,Engineering machinery, tools, and implements ,TA213-215 - Abstract
The study of brainwaves and brain–computer interfaces (BCIs) or brain–machine interfaces (BMIs) has emerged as a transformative field with the potential to revolutionize society’s well-being. This technical paper delves into the multifaceted domain of brainwave analysis and its integration with BCIs, presenting an approach that aims to enhance the fabric of society through various applications, with BCIs aiding in various assistive technologies, the detection of neurological abnormalities, and biofeedback mechanisms for improved concentration. This study explores the relationship between brainwave patterns and the levels of focus using EEG data. The results reveal distinct changes in brainwave activity, notably in the delta and beta frequency ranges, corresponding to different levels of cognitive engagement. Building upon these findings, we propose the development of a biofeedback-based concentration enhancement program for students. This study, using an approach equipped with real-time EEG monitoring and feedback mechanisms, aims to empower students to improve their concentration, particularly in educational settings. Such an innovative approach holds promise for enhancing academic performance and learning experiences, offering valuable insights into the optimization of cognitive functions through neurofeedback interventions.
- Published
- 2024
- Full Text
- View/download PDF
183. Comparison of Domain Selection Methods for Multi-Source Manifold Feature Transfer Learning in Electroencephalogram Classification
- Author
-
Rito Clifford Maswanganyi, Chungling Tu, Pius Adewale Owolawi, and Shengzhi Du
- Subjects
brain–computer interface (BCI) ,electroencephalogram (EEG) ,transfer learning (TL) ,negative transfer (NT) ,multi-source manifold feature transfer learning (MMFT) ,enhanced multi-class MMFT (EMC-MMFT) ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Transfer learning (TL) utilizes knowledge from the source domain (SD) to enhance the classification rate in the target domain (TD). It has been widely used to address the challenge of sessional and inter-subject variations in electroencephalogram (EEG)-based brain–computer interfaces (BCIs). However, utilizing knowledge from a combination of both related and non-related sources can significantly deteriorate classification performance across individual target domains, resulting in negative transfer (NT). Hence, NT is one of the most significant challenges for transfer learning algorithms. Notably, domain selection techniques have been developed to address NT arising from the transfer of knowledge from non-related sources. However, existing domain selection approaches iterate through domains and remove a single low-beneficial domain at a time, which can massively affect classification performance in each iteration, since SDs respond differently to other sources. In this paper, we compare domain selection techniques for a multi-source manifold feature transfer learning (MMFT) framework to address the challenge of NT, and evaluate the effect of beneficial and non-beneficial sources on TL performance. To this end, some commonly used domain selection methods are compared, namely domain transferability estimation (DTE), rank of domain (ROD), label similarity analysis, and enhanced multi-class MMFT (EMC-MMFT), on the same multi-class cross-session and cross-subject classification problems. The experimental results demonstrate the superiority of the EMC-MMFT algorithm in terms of minimizing the effect of NT.
The highest classification accuracy (CA) of 100% is achieved when optimal combinations of highly beneficial sources are selected for two-class SSMVEP problems, while the highest CAs of 98% and 87% are achieved for three- and four-class SSMVEP problems, respectively. The highest CA of 98% is achieved for two-class MI classification problems, while the highest CAs of 90% and 71.5% are obtained for three- and four-class MI problems, respectively.
- Published
- 2024
- Full Text
- View/download PDF
184. Methods for motion artifact reduction in online brain-computer interface experiments: a systematic review
- Author
-
Mathias Schmoigl-Tonis, Christoph Schranz, and Gernot R. Müller-Putz
- Subjects
brain-computer interface (BCI) ,electroencephalography (EEG) ,artifact removal ,motion artifact ,muscle artifact ,fasciculation ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Brain-computer interfaces (BCIs) have emerged as a promising technology for enhancing communication between the human brain and external devices. Electroencephalography (EEG) is particularly promising in this regard because it has high temporal resolution and can be easily worn on the head in everyday life. However, motion artifacts caused by muscle activity, fasciculation, cable swings, or magnetic induction pose significant challenges in real-world BCI applications. In this paper, we present a systematic review of methods for motion artifact reduction in online BCI experiments. Using the PRISMA filter method, we conducted a comprehensive literature search on PubMed, focusing on open access publications from 1966 to 2022. We evaluated 2,333 publications based on predefined filtering rules to identify existing methods and pipelines for motion artifact reduction in EEG data. We present a lookup table of all papers that passed the defined filters, all used methods, and pipelines and compare their overall performance and suitability for online BCI experiments. We summarize suitable methods, algorithms, and concepts for motion artifact reduction in online BCI applications, highlight potential research gaps, and discuss existing community consensus. This review aims to provide a comprehensive overview of the current state of the field and guide researchers in selecting appropriate methods for motion artifact reduction in online BCI experiments.
- Published
- 2023
- Full Text
- View/download PDF
185. Classifying human emotions in HRI: applying global optimization model to EEG brain signals
- Author
-
Mariacarla Staffa, Lorenzo D'Errico, Simone Sansalone, and Maryam Alimardani
- Subjects
human-robot interaction (HRI) ,EEG signals ,brain-computer interface (BCI) ,frontal brain asymmetry (FBA) ,Theory of Mind (ToM) ,Global Optimization Model (GOM) ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Significant efforts have been made in the past decade to humanize both the form and function of social robots to increase their acceptance among humans. To this end, social robots have recently been combined with brain-computer interface (BCI) systems in an attempt to give them an understanding of human mental states, particularly emotions. However, emotion recognition using BCIs poses several challenges, such as subjectivity of emotions, contextual dependency, and a lack of reliable neuro-metrics for real-time processing of emotions. Furthermore, the use of BCI systems introduces its own set of limitations, such as the bias-variance trade-off, dimensionality, and noise in the input data space. In this study, we sought to address some of these challenges by detecting human emotional states from EEG brain activity during human-robot interaction (HRI). EEG signals were collected from 10 participants who interacted with a Pepper robot that demonstrated either a positive or negative personality. Using emotion valence and arousal measures derived from frontal brain asymmetry (FBA), several machine learning models were trained to classify human's mental states in response to the robot personality. To improve classification accuracy, all proposed classifiers were subjected to a Global Optimization Model (GOM) based on feature selection and hyperparameter optimization techniques. The results showed that it is possible to classify a user's emotional responses to the robot's behavior from the EEG signals with an accuracy of up to 92%. The outcome of the current study contributes to the first level of the Theory of Mind (ToM) in Human-Robot Interaction, enabling robots to comprehend users' emotional responses and attribute mental states to them. Our work advances the field of social and assistive robotics by paving the way for the development of more empathetic and responsive HRI in the future.
- Published
- 2023
- Full Text
- View/download PDF
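The frontal-brain-asymmetry valence measure referenced above is conventionally the difference in log alpha-band (8–13 Hz) power between right and left frontal electrodes (e.g., F4 minus F3). A sketch using FFT band power on synthetic signals (the electrode labels and this exact index form are assumptions, not the study's reported pipeline):

```python
import numpy as np

def bandpower(x, fs, f_lo, f_hi):
    """Mean power of x in the band [f_lo, f_hi] via the periodogram."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

def frontal_asymmetry(left, right, fs):
    """FBA index: log alpha power (8-13 Hz) at the right minus left frontal site."""
    return np.log(bandpower(right, fs, 8, 13)) - np.log(bandpower(left, fs, 8, 13))

rng = np.random.default_rng(6)
fs = 256
t = np.arange(4 * fs) / fs
alpha = np.sin(2 * np.pi * 10 * t)                 # 10 Hz alpha rhythm
f3 = 2.0 * alpha + 0.3 * rng.normal(size=len(t))   # strong left-frontal alpha
f4 = 0.5 * alpha + 0.3 * rng.normal(size=len(t))   # weaker right-frontal alpha
fba = frontal_asymmetry(f3, f4, fs)                # negative: left alpha dominates
```

Since alpha power is taken as inversely related to cortical activation, the sign of this index is the usual proxy for valence, which machine learning models can then use as an input feature.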
186. Editorial: Mind over brain, brain over mind: cognitive causes and consequences of controlling brain activity - volume II
- Author
-
Elisabeth V. C. Friedrich, Christa Neuper, and Reinhold Scherer
- Subjects
brain-computer interface (BCI) ,neurofeedback (NF) ,neuro-rehabilitation ,multisensory feedback ,cognitive improvement ,advances and challenges ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Published
- 2023
- Full Text
- View/download PDF
187. Editorial: Biosignal-based human–computer interfaces
- Author
-
Xin Zhang, Yuxuan Zhou, and Xianta Jiang
- Subjects
human-computer interface (HCI) ,brain-computer interface (BCI) ,biosignal ,biomaker ,multimodal ,Electronic computers. Computer science ,QA75.5-76.95 - Published
- 2023
- Full Text
- View/download PDF
188. LMDA-Net:A lightweight multi-dimensional attention network for general EEG-based brain-computer interfaces and interpretability
- Author
-
Zhengqing Miao, Meirong Zhao, Xin Zhang, and Dong Ming
- Subjects
Attention ,Brain-computer interface (BCI) ,Electroencephalography (EEG) ,Model interpretability ,Neural networks ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Electroencephalography (EEG)-based brain-computer interfaces (BCIs) pose a challenge for decoding due to their low spatial resolution and signal-to-noise ratio. Typically, EEG-based recognition of activities and states involves the use of prior neuroscience knowledge to generate quantitative EEG features, which may limit BCI performance. Although neural network-based methods can effectively extract features, they often encounter issues such as poor generalization across datasets, high predicting volatility, and low model interpretability. To address these limitations, we propose a novel lightweight multi-dimensional attention network, called LMDA-Net. By incorporating two novel attention modules designed specifically for EEG signals, the channel attention module and the depth attention module, LMDA-Net is able to effectively integrate features from multiple dimensions, resulting in improved classification performance across various BCI tasks. LMDA-Net was evaluated on four high-impact public datasets, including motor imagery (MI) and P300-Speller, and was compared with other representative models. The experimental results demonstrate that LMDA-Net outperforms other representative methods in terms of classification accuracy and predicting volatility, achieving the highest accuracy in all datasets within 300 training epochs. Ablation experiments further confirm the effectiveness of the channel attention module and the depth attention module. To facilitate an in-depth understanding of the features extracted by LMDA-Net, we propose class-specific neural network feature interpretability algorithms that are suitable for evoked responses and endogenous activities. By mapping the output of the specific layer of LMDA-Net to the time or spatial domain through class activation maps, the resulting feature visualizations can provide interpretable analysis and establish connections with EEG time-spatial analysis in neuroscience. 
In summary, LMDA-Net shows great potential as a general decoding model for various EEG tasks.
- Published
- 2023
- Full Text
- View/download PDF
189. Survey on the research direction of EEG-based signal processing.
- Author
-
Congzhong Sun and Chaozhou Mou
- Subjects
ARTIFICIAL neural networks ,SIGNAL processing ,DATA augmentation ,GENERATIVE adversarial networks ,MACHINE learning - Abstract
Electroencephalography (EEG) is increasingly important in Brain-Computer Interface (BCI) systems due to its portability and simplicity. In this paper, we provide a comprehensive review of research on EEG signal processing techniques since 2021, with a focus on preprocessing, feature extraction, and classification methods. We analyzed 61 research articles retrieved from academic search engines, including CNKI, PubMed, Nature, IEEE Xplore, and Science Direct. For preprocessing, we focus on innovatively proposed preprocessing methods, channel selection, and data augmentation. Data augmentation is classified into conventional methods (sliding windows, segmentation and recombination, and noise injection) and deep learning methods [Generative Adversarial Networks (GAN) and Variation AutoEncoder (VAE)]. We also pay attention to the application of deep learning, and multi-method fusion approaches, including both conventional algorithm fusion and fusion between conventional algorithms and deep learning. Our analysis identifies 35 (57.4%), 18 (29.5%), and 37 (60.7%) studies in the directions of preprocessing, feature extraction, and classification, respectively. We find that preprocessing methods have become widely used in EEG classification (96.7% of reviewed papers) and comparative experiments have been conducted in some studies to validate preprocessing. We also discussed the adoption of channel selection and data augmentation and concluded several mentionable matters about data augmentation. Furthermore, deep learning methods have shown great promise in EEG classification, with Convolutional Neural Networks (CNNs) being the main structure of deep neural networks (92.3% of deep learning papers). We summarize and analyze several innovative neural networks, including CNNs and multi-structure fusion. 
However, we also identified several problems and limitations of current deep learning techniques in EEG classification, including inappropriate input representations, low cross-subject accuracy, an imbalance between parameter counts and time costs, and a lack of interpretability. Finally, we highlight the emerging trend of multi-method fusion approaches (49.2% of reviewed papers) and analyze the data with some examples. We also provide insights into some challenges of multi-method fusion. Our review lays a foundation for future studies to improve EEG classification performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
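Of the conventional augmentation methods the review lists, sliding windows are the simplest: one long trial is cut into overlapping crops that all inherit the trial's label. A sketch:

```python
import numpy as np

def sliding_windows(trial, win_len, stride):
    """Cut a (channels, samples) trial into overlapping windows."""
    n = trial.shape[1]
    starts = range(0, n - win_len + 1, stride)
    return np.stack([trial[:, s:s + win_len] for s in starts])

rng = np.random.default_rng(7)
trial = rng.normal(size=(8, 1000))                 # 8 channels, 1000 samples
crops = sliding_windows(trial, win_len=250, stride=125)
labels = np.full(len(crops), 3)                    # every crop keeps the trial label
```

With a 250-sample window and 50% overlap, a single 1000-sample trial yields seven training examples, which is why this method is so widely adopted for small EEG datasets.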
190. Dataset Evaluation Method and Application for Performance Testing of SSVEP-BCI Decoding Algorithm.
- Author
-
Liang, Liyan, Zhang, Qian, Zhou, Jie, Li, Wenyu, and Gao, Xiaorong
- Subjects
- *
DECODING algorithms , *EVALUATION methodology , *VISUAL evoked potentials , *BRAIN-computer interfaces , *PHASE modulation , *TEST systems - Abstract
Steady-state visual evoked potential (SSVEP)-based brain–computer interface (BCI) systems have been extensively researched over the past two decades, and multiple sets of standard datasets have been published and widely used. However, there are differences in sample distribution and collection equipment across datasets, and there is a lack of a unified evaluation method. Most new SSVEP decoding algorithms are tested on self-collected data or verified offline using one or two previous datasets, which can lead to performance differences in actual application scenarios. To address these issues, this paper proposes an SSVEP dataset evaluation method and analyzes six datasets with frequency- and phase-modulation paradigms to form an SSVEP algorithm evaluation dataset system. Finally, based on these datasets, performance tests were carried out on four existing SSVEP decoding algorithms. The findings reveal that the performance of the same algorithm varies significantly when tested on diverse datasets, with substantial performance variations among subjects from best- to worst-performing. These results demonstrate that the SSVEP dataset evaluation method can integrate six datasets to form an SSVEP algorithm performance testing dataset system. This system can test and verify SSVEP decoding algorithms from different perspectives, such as different subjects, environments, and equipment, which is helpful for the research of new SSVEP decoding algorithms and has significant reference value for other BCI application fields. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
191. Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: a review.
- Author
-
Altaheri, Hamdi, Muhammad, Ghulam, Alsulaiman, Mansour, Amin, Syed Umar, Altuwaijri, Ghadir Ali, Abdul, Wadood, Bencherif, Mohamed A., and Faisal, Mohammed
- Subjects
- *
MOTOR imagery (Cognition) , *DEEP learning , *ELECTROENCEPHALOGRAPHY , *BRAIN-computer interfaces , *TECHNOLOGICAL innovations , *MOBILE robots , *CLASSIFICATION , *WAKEFULNESS - Abstract
The brain–computer interface (BCI) is an emerging technology that has the potential to revolutionize the world, with numerous applications ranging from healthcare to human augmentation. Electroencephalogram (EEG) motor imagery (MI) is among the most common BCI paradigms that have been used extensively in smart healthcare applications such as post-stroke rehabilitation and mobile assistive robots. In recent years, the contribution of deep learning (DL) has had a phenomenal impact on MI-EEG-based BCI. In this work, we systematically review the DL-based research for MI-EEG classification from the past ten years. This article first explains the procedure for selecting the studies and then gives an overview of BCI, EEG, and MI systems. The DL-based techniques applied in MI classification are then analyzed and discussed from four main perspectives: preprocessing, input formulation, deep learning architecture, and performance evaluation. In the discussion section, three major questions about DL-based MI classification are addressed: (1) Is preprocessing required for DL-based techniques? (2) What input formulations are best for DL-based techniques? (3) What are the current trends in DL-based techniques? Moreover, this work summarizes MI-EEG-based applications, extensively explores public MI-EEG datasets, and gives an overall visualization of the performance attained for each dataset based on the reviewed articles. Finally, current challenges and future directions are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
192. Overlapping filter bank convolutional neural network for multisubject multicategory motor imagery brain-computer interface.
- Author
-
Luo, Jing, Li, Jundong, Mao, Qi, Shi, Zhenghao, Liu, Haiqin, Ren, Xiaoyong, and Hei, Xinhong
- Subjects
- *
CONVOLUTIONAL neural networks , *MOTOR imagery (Cognition) , *FILTER banks , *BRAIN-computer interfaces , *FUSIFORM gyrus - Abstract
Background: The motor imagery brain-computer interface (BCI) is a classic and promising BCI technology for achieving brain-computer integration. In motor imagery BCI, the operational frequency band of the EEG greatly affects the performance of the motor imagery EEG recognition model. However, because most algorithms use a single broad frequency band, the discriminative information in multiple sub-bands is not fully utilized. Thus, using convolutional neural networks (CNNs) to extract discriminative features from EEG signals of different frequency components is a promising method for multisubject EEG recognition. Methods: This paper presents a novel overlapping filter bank CNN to incorporate discriminative information from multiple frequency components in multisubject motor imagery recognition. Specifically, two overlapping filter banks, with either a fixed or a sliding low-cut frequency, are employed to obtain multiple frequency-component representations of the EEG signals. Multiple CNN models are then trained separately, and their output probabilities are integrated to determine the predicted EEG label. Results: Experiments were conducted with four popular CNN backbone models on three public datasets. The results showed that the overlapping filter bank CNN was efficient and universal in improving multisubject motor imagery BCI performance. Specifically, compared with the original backbone models, the proposed method improved average accuracy by 3.69 percentage points, F1 score by 0.04, and AUC by 0.03, and it performed best in comparison with state-of-the-art methods. Conclusion: The proposed overlapping filter bank CNN framework with fixed low-cut frequency is an efficient and universal method for improving the performance of multisubject motor imagery BCI. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
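A minimal sketch of the filter-bank front end the abstract describes, assuming a fixed 4 Hz low-cut and a few overlapping upper cutoffs (the exact bands, filter order, and backbone CNNs are not specified in the abstract, so generic choices stand in, with a placeholder ensemble step):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg, fs, low=4.0, highs=(16.0, 24.0, 32.0, 40.0)):
    """Overlapping band-pass views of a trial with a fixed low-cut frequency."""
    bands = []
    for high in highs:
        b, a = butter(4, [low, high], btype="bandpass", fs=fs)
        bands.append(filtfilt(b, a, eeg, axis=-1))
    return bands  # one (channels, samples) array per sub-band

def ensemble_predict(models, bands):
    """Average the class probabilities of the per-band models (sketch only)."""
    probs = np.mean([m.predict_proba(b) for m, b in zip(models, bands)], axis=0)
    return probs.argmax(axis=1)

# Synthetic trial: 22 channels, 2 s at 250 Hz
rng = np.random.default_rng(1)
trial = rng.standard_normal((22, 500))
bands = filter_bank(trial, fs=250)
```

In the paper's setup, each band-passed view would feed its own CNN; `ensemble_predict` only sketches the probability-averaging step for any models exposing a `predict_proba` method.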
193. EEG-based emergency braking intention detection during simulated driving.
- Author
-
Liang, Xinbin, Yu, Yang, Liu, Yadong, Liu, Kaixuan, Liu, Yaru, and Zhou, Zongtan
- Subjects
- *
ELECTROENCEPHALOGRAPHY , *RECEIVER operating characteristic curves , *CLASSIFICATION algorithms , *WAKEFULNESS - Abstract
Background: Current research on electroencephalogram (EEG)-based detection of a driver's emergency braking intention focuses on recognizing emergency braking versus normal driving, with little attention to differentiating emergency braking from normal braking. Moreover, the classification algorithms used are mainly traditional machine learning methods whose inputs are manually extracted features. Methods: To this end, a novel EEG-based emergency braking intention detection strategy is proposed in this paper. The experiment was conducted on a simulated driving platform with three scenarios: normal driving, normal braking, and emergency braking. We compared and analyzed the EEG feature maps of the two braking modes and explored traditional, Riemannian geometry-based, and deep learning-based methods to predict emergency braking intention, all using the raw EEG signals rather than manually extracted features as input. Results: We recruited 10 subjects and used the area under the receiver operating characteristic curve (AUC) and F1 score as evaluation metrics. Both the Riemannian geometry-based method and the deep learning-based method outperformed the traditional method. At 200 ms before the start of actual braking, the AUC and F1 score of the deep learning-based EEGNet algorithm were 0.94 and 0.65 for emergency braking vs. normal driving, and 0.91 and 0.85 for emergency braking vs. normal braking, respectively. The EEG feature maps also showed a significant difference between emergency braking and normal braking. Overall, detecting emergency braking from both normal driving and normal braking based on EEG signals was feasible. Conclusions: The study provides a user-centered framework for human–vehicle co-driving. If the driver's intention to brake in an emergency can be accurately identified, the vehicle's automatic braking system can be activated hundreds of milliseconds earlier than the driver's actual braking action, potentially avoiding some serious collisions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
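The AUC and F1 figures quoted in the abstract can be computed for any detector that outputs a score per epoch; the sketch below uses synthetic scores, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(2)
# Synthetic detector scores: emergency-braking epochs (label 1) score higher
y_true = np.repeat([0, 1], 100)
scores = np.concatenate([rng.normal(0.3, 0.15, 100),
                         rng.normal(0.7, 0.15, 100)])
auc = roc_auc_score(y_true, scores)       # threshold-free ranking quality
f1 = f1_score(y_true, scores > 0.5)       # quality at one operating point
```

Reporting both metrics, as the study does, matters for asynchronous detection: AUC summarizes separability across all thresholds, while F1 reflects the specific threshold an online system would actually deploy.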
194. Classification of lower limb motor imagery based on iterative EEG source localization and feature fusion.
- Author
-
Peng, Xiaobo, Liu, Junhong, Huang, Ying, Mao, Yanhao, and Li, Dong
- Subjects
- *
MOTOR imagery (Cognition) , *ELECTROENCEPHALOGRAPHY , *FEATURE extraction , *PARTICLE swarm optimization , *BRAIN-computer interfaces , *SUPPORT vector machines , *WAKEFULNESS - Abstract
Motor imagery (MI) brain–computer interface (BCI) systems have broad application prospects in rehabilitation and other fields. However, to achieve accurate and practical MI-BCI applications, several critical issues, such as channel selection, electroencephalogram (EEG) feature extraction, and EEG classification, still need to be better resolved. In this paper, these issues are studied for lower-limb MI, which is more difficult and less studied than upper-limb MI. First, a novel iterative EEG source localization method is proposed for channel selection. Channels FC1, FC2, C1, C2 and Cz, instead of the commonly used traditional channel set (TCS) of C3, C4 and Cz, are selected as the optimal channel set (OCS). Then, a multi-domain feature (MDF) extraction algorithm is presented to fuse single-domain features into multi-domain features. Finally, a particle swarm optimization-based support vector machine (SVM) method is utilized to classify the EEG data collected in our lower-limb MI experiment. The results show a classification accuracy of 88.43%, which is 3.35–5.41 percentage points higher than that of a traditional SVM classifying single-domain features on the TCS. This proves that the combination of OCS and MDF can not only reduce the amount of data processing but also retain more feature information, improving the accuracy of EEG classification. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
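A hedged sketch of the multi-domain-feature-plus-SVM idea: time-domain log-variance fused with mu- and beta-band powers, with a plain grid search standing in for the paper's particle swarm optimization. The feature choices, channel count, and parameters here are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def multi_domain_features(trial, fs):
    """Fuse simple time- and frequency-domain features per channel."""
    var = trial.var(axis=-1)                           # time domain: variance
    f, psd = welch(trial, fs=fs, nperseg=128)
    mu = psd[:, (f >= 8) & (f <= 13)].mean(axis=-1)    # mu-band power
    beta = psd[:, (f >= 13) & (f <= 30)].mean(axis=-1) # beta-band power
    return np.concatenate([np.log(var), np.log(mu), np.log(beta)])

# Synthetic two-class data: class 1 trials have larger overall amplitude
rng = np.random.default_rng(3)
labels = rng.integers(0, 2, 60)
X = np.stack([multi_domain_features(rng.standard_normal((5, 500)) * (1 + y), 250)
              for y in labels])

# Grid search stands in for the paper's particle swarm optimisation of SVM params
clf = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=3)
clf.fit(X, labels)
```

The design point the abstract makes, that fused multi-domain features retain more information than any single domain, shows up here as a longer, more discriminative feature vector fed to one classifier.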
195. An olfactory-based Brain-Computer Interface: electroencephalography changes during odor perception and discrimination.
- Author
-
Morozova, Marina, Bikbavova, Alsu, Bulanov, Vladimir, and Lebedev, Mikhail A.
- Subjects
SMELL ,OLFACTORY perception ,ELECTROENCEPHALOGRAPHY ,BRAIN-computer interfaces ,THETA rhythm ,MILD cognitive impairment ,CENTRAL nervous system - Abstract
Brain-Computer Interfaces (BCIs) are devices designed for establishing communication between the central nervous system and a computer. The communication can occur through different sensory modalities, and most commonly visual and auditory modalities are used. Here we propose that BCIs can be expanded by the incorporation of olfaction and discuss the potential applications of such olfactory BCIs. To substantiate this idea, we present results from two olfactory tasks: one that required attentive perception of odors without any overt report, and the second one where participants discriminated consecutively presented odors. In these experiments, EEG recordings were conducted in healthy participants while they performed the tasks guided by computer-generated verbal instructions. We emphasize the importance of relating EEG modulations to the breath cycle to improve the performance of an olfactory-based BCI. Furthermore, theta-activity could be used for olfactory-BCI decoding. In our experiments, we observed modulations of theta activity over the frontal EEG leads approximately 2 s after the inhalation of an odor. Overall, frontal theta rhythms and other types of EEG activity could be incorporated in the olfactory-based BCIs which utilize odors either as inputs or outputs. These BCIs could improve olfactory training required for conditions like anosmia and hyposmia, and mild cognitive impairment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
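The reported frontal theta modulation roughly 2 s after inhalation suggests a simple decoding feature: band-limited theta power in a post-onset window. A synthetic sketch (window length, filter order, and signal amplitudes are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def theta_power(eeg, fs, onset_s, win_s=1.0):
    """Mean theta-band (4-8 Hz) power in a window starting `onset_s` seconds
    into the recording (the abstract reports modulation ~2 s after inhalation)."""
    b, a = butter(4, [4.0, 8.0], btype="bandpass", fs=fs)
    theta = filtfilt(b, a, eeg, axis=-1)
    start = int(onset_s * fs)
    return float((theta[..., start:start + int(win_s * fs)] ** 2).mean())

# Synthetic frontal channel: low noise plus a theta burst 2 s after "inhalation"
rng = np.random.default_rng(4)
fs = 250
t = np.arange(5 * fs) / fs
sig = 0.2 * rng.standard_normal(t.size)
sig[2 * fs:3 * fs] += np.sin(2 * np.pi * 6 * t[2 * fs:3 * fs])
burst, baseline = theta_power(sig, fs, 2.0), theta_power(sig, fs, 0.0)
```

As the authors emphasize, a practical olfactory BCI would align such windows to the measured breath cycle rather than to a fixed clock.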
196. A garment that measures brain activity: proof of concept of an EEG sensor layer fully implemented with smart textiles.
- Author
-
López-Larraz, Eduardo, Escolano, Carlos, Robledo-Menéndez, Almudena, Morlas, Leyre, Alda, Alexandra, and Minguez, Javier
- Subjects
ELECTROTEXTILES ,ELECTROENCEPHALOGRAPHY ,PROOF of concept ,BRAIN-computer interfaces ,CLOTHING & dress ,SKIN - Abstract
This paper presents the first garment capable of measuring brain activity with accuracy comparable to that of state-of-the art dry electroencephalogram (EEG) systems. The main innovation is an EEG sensor layer (i.e., the electrodes, the signal transmission, and the cap support) made entirely of threads, fabrics, and smart textiles, eliminating the need for metal or plastic materials. The garment is connected to a mobile EEG amplifier to complete the measurement system. As a first proof of concept, the new EEG system (Garment-EEG) was characterized with respect to a state-of-the-art Ag/AgCl dry-EEG system (Dry-EEG) over the forehead area of healthy participants in terms of: (1) skin-electrode impedance; (2) EEG activity; (3) artifacts; and (4) user ergonomics and comfort. The results show that the Garment-EEG system provides comparable recordings to Dry-EEG, but it is more susceptible to artifacts under adverse recording conditions due to poorer contact impedances. The textile-based sensor layer offers superior ergonomics and comfort compared to its metal-based counterpart. We provide the datasets recorded with Garment-EEG and Dry-EEG systems, making available the first open-access dataset of an EEG sensor layer built exclusively with textile materials. Achieving user acceptance is an obstacle in the field of neurotechnology. The introduction of EEG systems encapsulated in wearables has the potential to democratize neurotechnology and non-invasive brain-computer interfaces, as they are naturally accepted by people in their daily lives. Furthermore, supporting the EEG implementation in the textile industry may result in lower cost and less-polluting manufacturing processes compared to metal and plastic industries. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
197. Hyper-parameter tuning and feature extraction for asynchronous action detection from sub-thalamic nucleus local field potentials.
- Author
-
Martineau, Thomas, Shenghong He, Vaidyanathan, Ravi, and Huiling Tan
- Subjects
FEATURE extraction ,DEEP brain stimulation ,HUMAN-machine systems ,BRAIN-computer interfaces ,PARKINSON'S disease - Abstract
Introduction: Decoding brain states from subcortical local field potentials (LFPs) indicative of activities such as voluntary movement, tremor, or sleep stages holds significant potential for treating neurodegenerative disorders and offers new paradigms in brain-computer interfaces (BCIs). Identified states can serve as control signals in coupled human-machine systems, e.g., to regulate deep brain stimulation (DBS) therapy or control prosthetic limbs. However, the behavior, performance, and efficiency of LFP decoders depend on an array of design and calibration settings encapsulated in a single set of hyper-parameters. Although methods exist to tune hyper-parameters automatically, decoders are typically found through exhaustive trial-and-error, manual search, and intuitive experience. Methods: This study introduces a Bayesian optimization (BO) approach to hyper-parameter tuning, applicable across the feature extraction, channel selection, classification, and stage transition stages of the entire decoding pipeline. The optimization method is compared with five real-time feature extraction methods paired with four classifiers to decode voluntary movement asynchronously from LFPs recorded with DBS electrodes implanted in the subthalamic nucleus of Parkinson's disease patients. Results: Detection performance, measured as the geometric mean of classifier specificity and sensitivity, is automatically optimized. BO improves decoding performance over the initial parameter settings across all methods. The best decoders achieve a maximum sensitivity-specificity geometric mean of 0.74 ± 0.06 (mean ± SD across all participants). In addition, parameter relevance is determined using the BO surrogate models. Discussion: Hyper-parameters tend to be sub-optimally fixed across different users rather than individually adjusted or specifically set for a decoding task, and the relevance of each parameter and comparisons between algorithms can be difficult to track as the decoding problem evolves. We believe the proposed decoding pipeline and BO approach are a promising solution to such challenges surrounding hyper-parameter tuning, and that the study's findings can inform future design iterations of neural decoders for adaptive DBS and BCI. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
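The objective being optimized, the geometric mean of sensitivity and specificity, is straightforward to compute from a confusion matrix; a minimal sketch:

```python
import numpy as np

def detection_gmean(y_true, y_pred):
    """Geometric mean of sensitivity and specificity, the detection objective
    the study tunes with Bayesian optimisation."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sens = tp / (tp + fn)   # hit rate on movement epochs
    spec = tn / (tn + fp)   # correct-rejection rate on rest epochs
    return np.sqrt(sens * spec)

g = detection_gmean([1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 1, 1])
```

Because the geometric mean collapses to zero if either rate does, it penalizes degenerate decoders that trivially maximize one rate, which makes it a natural single scalar for a BO loop to maximize.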
198. Evaluation of Single-Trial Classification to Control a Visual ERP-BCI under a Situation Awareness Scenario.
- Author
-
Fernández-Rodríguez, Álvaro, Ron-Angevin, Ricardo, Velasco-Álvarez, Francisco, Diaz-Pineda, Jaime, Letouzé, Théodore, and André, Jean-Marc
- Subjects
- *
SITUATIONAL awareness , *AIR traffic controllers , *BRAIN-computer interfaces , *EVOKED potentials (Electrophysiology) - Abstract
An event-related potential (ERP)-based brain–computer interface (BCI) can be used to monitor a user's cognitive state during a surveillance task in a situational awareness context. The present study explores the use of an ERP-BCI for detecting new planes in an air traffic controller (ATC). Two experiments were conducted to evaluate the impact of different visual factors on target detection. Experiment 1 validated the type of stimulus used and the effect of not knowing its appearance location in an ERP-BCI scenario. Experiment 2 evaluated the effect of the size of the target stimulus appearance area and the stimulus salience in an ATC scenario. The main results demonstrate that the size of the plane appearance area had a negative impact on the detection performance and on the amplitude of the P300 component. Future studies should address this issue to improve the performance of an ATC in stimulus detection using an ERP-BCI. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
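ERP-BCIs like the one above must detect the P300 on single trials, which is hard precisely because trial averaging is what normally exposes the component. A synthetic sketch of that averaging effect (peak amplitude, latency, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
fs, n_trials = 256, 40
t = np.arange(int(0.8 * fs)) / fs                           # 800 ms epochs
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))    # ~300 ms positive peak
targets = p300 + 8.0 * rng.standard_normal((n_trials, t.size))
nontargets = 8.0 * rng.standard_normal((n_trials, t.size))

# Averaging across trials raises SNR by ~sqrt(n_trials), exposing the P300
avg_t, avg_n = targets.mean(axis=0), nontargets.mean(axis=0)
win = (t >= 0.25) & (t <= 0.35)                             # window around the peak
amp_diff = avg_t[win].mean() - avg_n[win].mean()
```

Single-trial classification, as evaluated in the study, has to work without this averaging, which is why factors that shrink the P300 amplitude (such as a large stimulus appearance area) directly hurt detection performance.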
199. Decoding Multi-Class Motor Imagery and Motor Execution Tasks Using Riemannian Geometry Algorithms on Large EEG Datasets.
- Author
-
Shuqfa, Zaid, Belkacem, Abdelkader Nasreddine, and Lakas, Abderrahmane
- Subjects
- *
RIEMANNIAN geometry , *MOTOR imagery (Cognition) , *DECODING algorithms , *BRAIN-computer interfaces , *ELECTROENCEPHALOGRAPHY , *ALGORITHMS , *WAKEFULNESS - Abstract
The use of Riemannian geometry decoding algorithms for classifying trials in electroencephalography-based motor-imagery brain–computer interfaces (BCIs) is relatively new and promises to outperform current state-of-the-art methods by overcoming the noise and nonstationarity of electroencephalography signals. However, the related literature reports high classification accuracy only on relatively small BCI datasets. The aim of this paper is to study the performance of a novel implementation of the Riemannian geometry decoding algorithm on large BCI datasets. We apply several Riemannian geometry decoding algorithms to a large offline dataset using four adaptation strategies: baseline, rebias, supervised, and unsupervised. Each strategy is applied to motor execution and motor imagery in two scenarios: 64 electrodes and 29 electrodes. The dataset comprises four-class bilateral and unilateral motor imagery and motor execution from 109 subjects. Across several classification experiments, the best accuracy is obtained with the baseline minimum-distance-to-Riemannian-mean strategy, with mean accuracy reaching up to 81.5% for motor execution and up to 76.4% for motor imagery. Accurate classification of EEG trials helps realize successful BCI applications that allow effective control of devices. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
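A self-contained sketch of the minimum-distance-to-Riemannian-mean classifier that performed best in the study, using the affine-invariant metric on trial covariance matrices. Matrix functions are computed via eigendecomposition; the synthetic data and dimensions are assumptions, and libraries such as pyRiemann provide production implementations:

```python
import numpy as np

def powm(a, p):
    """Matrix power of a symmetric positive-definite (SPD) matrix via eigh."""
    w, v = np.linalg.eigh(a)
    return (v * w ** p) @ v.T

def logm_spd(a):
    """Matrix logarithm of an SPD matrix via eigh."""
    w, v = np.linalg.eigh(a)
    return (v * np.log(w)) @ v.T

def airm_dist(a, b):
    """Affine-invariant Riemannian distance between SPD matrices."""
    s = powm(a, -0.5)
    return np.linalg.norm(logm_spd(s @ b @ s), "fro")

def riemannian_mean(mats, n_iter=15):
    """Karcher (Frechet) mean via fixed-point iteration in the tangent space."""
    m = np.mean(mats, axis=0)
    for _ in range(n_iter):
        s, s_inv = powm(m, 0.5), powm(m, -0.5)
        t = np.mean([logm_spd(s_inv @ c @ s_inv) for c in mats], axis=0)
        w, v = np.linalg.eigh(t)   # t is symmetric: exponentiate via eigh
        m = s @ ((v * np.exp(w)) @ v.T) @ s
    return m

def mdm_predict(cov, class_means):
    """Minimum distance to mean: assign the class with the nearest Karcher mean."""
    return int(np.argmin([airm_dist(cov, m) for m in class_means]))

# Synthetic 4-channel trials; class 1 has twice the signal amplitude
rng = np.random.default_rng(6)
def sample_cov(scale):
    x = rng.standard_normal((4, 100)) * scale
    return x @ x.T / 100

means = [riemannian_mean([sample_cov(1.0) for _ in range(20)]),
         riemannian_mean([sample_cov(2.0) for _ in range(20)])]
pred = mdm_predict(sample_cov(2.0), means)
```

The adaptation strategies the paper compares (rebias, supervised, unsupervised) would update `means` as new trials arrive, rather than keeping the baseline means fixed.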
200. Imagined Speech Classification Using EEG and Deep Learning.
- Author
-
Abdulghani, Mokhles M., Walters, Wilbur L., and Abed, Khalid H.
- Subjects
- *
DEEP learning , *MACHINE learning , *PATTERN recognition systems , *ELECTROENCEPHALOGRAPHY , *BRAIN waves , *RECURRENT neural networks , *WAKEFULNESS - Abstract
In this paper, we propose imagined speech-based brain wave pattern recognition using deep learning. Multiple features were extracted concurrently from eight-channel electroencephalography (EEG) signals. To obtain classifiable EEG data with fewer sensors, we placed the EEG sensors on carefully selected spots on the scalp. To reduce the dimensionality and complexity of the EEG dataset and to avoid overfitting during deep learning, we utilized the wavelet scattering transformation. A low-cost 8-channel EEG headset was used with MATLAB 2023a to acquire the EEG data. A long short-term memory recurrent neural network (LSTM-RNN) was used to decode the identified EEG signals into four audio commands: up, down, left, and right. The wavelet scattering transformation extracts the most stable features by passing the EEG data through a series of filtration processes, implemented here for each individual command in the EEG dataset. The proposed imagined speech-based brain wave pattern recognition approach achieved a 92.50% overall classification accuracy, which is promising for designing trustworthy real-time imagined speech-based brain–computer interface (BCI) systems. For a fuller evaluation of classification performance, other metrics were also considered: 92.74% precision, 92.50% recall, and a 92.62% F1-score. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
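Wavelet scattering itself needs a dedicated library (e.g., Kymatio, or the MATLAB toolbox used in the paper), so the sketch below substitutes a much simpler relative, log-energies of Haar wavelet detail bands, to show the kind of compact, frequency-localized features such a front end produces before the LSTM:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform."""
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_energy_features(x, levels=4):
    """Log-energy of the detail coefficients at each level: a compact
    frequency-band summary (a crude stand-in for wavelet scattering)."""
    feats = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        feats.append(np.log(np.mean(d ** 2) + 1e-12))
    return np.array(feats)

# Synthetic single-channel signals: slow (4 Hz) vs. fast (40 Hz) oscillation
rng = np.random.default_rng(7)
fs = 256
t = np.arange(2 * fs) / fs
slow = np.sin(2 * np.pi * 4 * t) + 0.1 * rng.standard_normal(t.size)
fast = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
f_slow, f_fast = wavelet_energy_features(slow), wavelet_energy_features(fast)
```

The dimensionality reduction the authors rely on is visible here: a 512-sample signal collapses to a handful of band energies, which is what keeps the downstream recurrent network small enough to avoid overfitting.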