19 results for "Imagined speech"
Search Results
2. Imagined Speech Recognition in a Subject Independent Approach Using a Prototypical Network
- Author
- Hernandez-Galvan, Alan, Ramirez-Alonso, Graciela, Camarillo-Cisneros, Javier, Samano-Lira, Gabriela, and Ramirez-Quintana, Juan
- Published
- 2023
- Full Text
- View/download PDF
3. Significance of Dimensionality Reduction in CNN-Based Vowel Classification from Imagined Speech Using Electroencephalogram Signals
- Author
- Banerjee, Oindrila, Govind, D., Dubey, Akhilesh Kumar, and Gangashetty, Suryakanth V.
- Published
- 2022
- Full Text
- View/download PDF
4. Multi-view Learning for EEG Signal Classification of Imagined Speech
- Author
- Barajas Montiel, Sandra Eugenia, Morales, Eduardo F., and Escalante, Hugo Jair
- Published
- 2022
- Full Text
- View/download PDF
5. Spectro-Spatio-Temporal EEG Representation Learning for Imagined Speech Recognition
- Author
- Ko, Wonjun, Jeon, Eunjin, and Suk, Heung-Il
- Published
- 2022
- Full Text
- View/download PDF
6. EEG Representations of Spatial and Temporal Features in Imagined Speech and Overt Speech
- Author
- Lee, Seo-Hyun, Lee, Minji, and Lee, Seong-Whan
- Published
- 2020
- Full Text
- View/download PDF
7. Individual Word Classification During Imagined Speech Using Intracranial Recordings
- Author
- Martin, Stephanie, Iturrate, Iñaki, Brunner, Peter, del R. Millán, José, Schalk, Gerwin, Knight, Robert T., and Pasley, Brian N.
- Published
- 2019
- Full Text
- View/download PDF
8. EEG-Based Subjects Identification Based on Biometrics of Imagined Speech Using EMD
- Author
- Moctezuma, Luis Alfredo, and Molinas, Marta
- Published
- 2018
- Full Text
- View/download PDF
9. Tensor Decomposition for Imagined Speech Discrimination in EEG
- Author
- García-Salinas, Jesús S., Villaseñor-Pineda, Luis, Reyes-García, Carlos Alberto, and Torres-García, Alejandro
- Published
- 2018
- Full Text
- View/download PDF
10. EEG Based Brain Computer Interface for Speech Communication: Principles and Applications
- Author
- Mohanchandra, Kusuma, Saha, Snehanshu, and Lingaraju, G. M.
- Published
- 2015
- Full Text
- View/download PDF
11. Role of Brain-Computer Technology in Synthetic Telepathy
- Author
- Krzysztof Hanczak
- Subjects
- Imagined speech, Brain–computer interface, Electroencephalography, Human–computer interaction, Computer science, Swarm behaviour, Drone, Entertainment, Computer technology
- Abstract
The objective of this work is to explore the potential use of electroencephalography (EEG) for silent communication by decoding imagined speech from measured electrical brain activity. Communication using BCI has both medical and non-medical applications. Worldwide, a large number of people suffer from disabilities that impair normal communication, and communication BCIs are excellent tools for helping affected patients communicate with others. BCI technology can be applied in many areas of life, from its most important role of helping people, through improving everyday life and making it easier, to entertainment. BCI technology also has various military uses: for example, monitoring a soldier's cognitive workload, controlling a drone swarm, or linking with a prosthetic.
- Published
- 2021
- Full Text
- View/download PDF
12. EEG Vowel Silent Speech Signal Discrimination Based on APIT-EMD and SVD
- Author
- J. Bacca, J. Caballero, L. C. Sarmiento, Omar López, and Sergio Villamizar
- Subjects
- Imagined speech, Vowel, Speech recognition, Singular value decomposition, Pairwise comparison, AdaBoost, Electroencephalography, Computer science, Brain–computer interface
- Abstract
A Brain-Computer Interface (BCI) system captures the neural activity of the Central Nervous System (CNS) and delivers an output that replaces the natural output of the CNS [1]. This helps people who have lost the ability to speak to spell words on a monitor, and helps people who have suffered limb amputation or motor paralysis of the body to recover movement. The main objective of the project is to control an upper limb, a Myohand twin Ottobock prosthesis ref. 8E38 = 7 [2], using EEG silent speech signals. In this research, we focus on a novel methodology for classifying vowel-based imagined speech, which uses the Adaptive-Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition (APIT-MEMD) [3] as the primary artifact-removal technique and the Singular Value Decomposition (SVD) [4] for the feature generation stage. For the classification stage, two classifiers were tested: the Extremely Randomized Trees classifier (ET) [5] and AdaBoost (ADB) [6]. The overall accuracy achieved per subject for pairwise vowel classification was 91.54% using ET. For a multiclass classifier, the overall accuracy over the eighteen subjects of the database was 79.06%.
- Published
- 2020
- Full Text
- View/download PDF
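The SVD feature generation stage described in this abstract can be illustrated with a short numpy sketch. Everything here (trial shape, number of retained singular values) is a made-up example, not the authors' configuration:

```python
import numpy as np

def svd_features(trial, k=4):
    """Top-k singular values of a (channels x samples) EEG trial,
    used as a compact spatio-temporal energy feature vector."""
    # compute_uv=False returns only the singular values,
    # already sorted in descending order.
    s = np.linalg.svd(trial, compute_uv=False)
    return s[:k]

# Hypothetical trial: 8 channels, 256 samples of random "EEG".
rng = np.random.default_rng(0)
trial = rng.standard_normal((8, 256))
feats = svd_features(trial)
assert feats.shape == (4,)
```

In a full pipeline such a vector would feed a classifier stage like the ET or AdaBoost classifiers the abstract describes.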
13. Classification of Phonemes Using EEG
- Author
- R. Aiswarya Priyanka and G. Sudha Sadasivam
- Subjects
- Imagined speech, Phonology, Speech recognition, Speech synthesis, Audio signal, Deep learning, Support vector machine, Electroencephalography, Artificial intelligence, Computer science, Brain–computer interface
- Abstract
Artificial speech synthesis can be performed using electroencephalography (EEG) and electrocorticography (ECoG) for the brain–computer interface (BCI). This work focuses on using EEG to classify phonological categories. Although literature is available on identifying and classifying phoneme information in EEG signals, the classification accuracy of some phonological categories is high while that of others is too low. This work therefore identifies the correlation between imagined-speech EEG and audio signals in order to select appropriate EEG features, and also identifies the EEG channels best suited for imagined speech. Once features are selected, phonemes are classified as vowels or consonants using a support vector machine. Experimental results suggest good accuracy when using 49 features that correlate with the audio signals.
- Published
- 2020
- Full Text
- View/download PDF
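The feature-selection idea in this abstract, choosing EEG features by their correlation with audio signals, can be sketched as a simple Pearson-correlation ranking. The data and the helper name below are hypothetical, not taken from the paper:

```python
import numpy as np

def select_by_correlation(eeg_feats, audio_target, k=3):
    """Rank EEG feature columns by absolute Pearson correlation
    with an audio-derived target and return the top-k indices."""
    r = np.array([abs(np.corrcoef(col, audio_target)[0, 1])
                  for col in eeg_feats.T])
    return np.argsort(r)[::-1][:k]

rng = np.random.default_rng(3)
audio = rng.standard_normal(100)           # audio-derived target sequence
eeg = rng.standard_normal((100, 6))        # 6 candidate EEG features
eeg[:, 2] = audio + 0.1 * rng.standard_normal(100)  # one informative column
top = select_by_correlation(eeg, audio)
assert top[0] == 2  # the informative feature ranks first
```

The selected indices would then be kept as the feature subset passed to the SVM classifier.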
14. EEG Representations of Spatial and Temporal Features in Imagined Speech and Overt Speech
- Author
- Seong-Whan Lee, Seo Hyun Lee, and Minji Lee
- Subjects
- Imagined speech, Speech recognition, Electroencephalography, Feature (machine learning), Decoding methods, Artificial intelligence, Computer science, Brain–computer interface
- Abstract
Imagined speech is an emerging paradigm for intuitive control of brain-computer interface based communication systems. Although decoding performance for imagined speech is improving with actively proposed architectures, the fundamental question of what component these systems are actually decoding remains open. Since imagined speech refers to an internal mechanism of producing speech, it may naturally resemble the distinct features of overt speech. In this paper, we investigate the close relation of the spatial and temporal features of imagined speech and overt speech using electroencephalography signals. Based on the common spatial pattern feature, we obtained averaged thirteen-class classification accuracies of 16.2% for imagined speech and 59.9% for overt speech (chance rate = 7.7%). Although overt speech showed significantly higher classification performance than imagined speech, we found potentially similar common spatial patterns for identical classes of imagined and overt speech. Furthermore, in the temporal features, we observed analogous grand-averaged potentials for the most distinguishable classes in the two speech paradigms. Specifically, the amplitude correlation between imagined speech and overt speech was 0.71 for the class with the highest true positive rate. The similar spatial and temporal features of the two paradigms may provide a key to bottom-up decoding of imagined speech, implying the possibility of robust multiclass imagined speech classification. Given their shared underlying patterns, this could be a milestone toward comprehensive decoding of speech-related paradigms.
- Published
- 2020
- Full Text
- View/download PDF
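The common spatial pattern (CSP) feature this abstract relies on can be computed by whitening the composite class covariance and diagonalizing one class in the whitened space. This is a generic two-class CSP illustration on synthetic data, not the authors' thirteen-class pipeline; shapes and signals are made up:

```python
import numpy as np

def csp_filters(trials_a, trials_b):
    """Return CSP spatial filters (rows) from two lists of
    (channels x samples) EEG trials, one list per class."""
    def avg_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in the
    # whitened space: directions that maximize class-A variance
    # simultaneously minimize class-B variance.
    evals, evecs = np.linalg.eigh(ca + cb)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    _, v = np.linalg.eigh(whiten @ ca @ whiten.T)
    return v.T @ whiten  # eigenvalues ascend, so the last row favors class A

rng = np.random.default_rng(1)

def make_trials(boost_ch):
    """20 hypothetical 4-channel trials with extra variance on one channel."""
    out = []
    for _ in range(20):
        t = rng.standard_normal((4, 200))
        t[boost_ch] += 3.0 * rng.standard_normal(200)
        out.append(t)
    return out

trials_a, trials_b = make_trials(0), make_trials(3)
w = csp_filters(trials_a, trials_b)
f = w[-1]  # filter that favors class-A variance
var_a = np.mean([np.var(f @ t) for t in trials_a])
var_b = np.mean([np.var(f @ t) for t in trials_b])
assert var_a > var_b
```

Log-variances of a few filters from each end of the spectrum are the usual CSP feature vector passed to a classifier.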
15. Individual Word Classification During Imagined Speech Using Intracranial Recordings
- Author
- José del R. Millán, Brian N. Pasley, Gerwin Schalk, Rob Knight, Iñaki Iturrate, Peter Brunner, and Stephanie Martin
- Subjects
- Imagined speech, Cued speech, Speech processing, Speech recognition, Perception, Active listening, Superior temporal gyrus, Inferior frontal gyrus, Discriminative model, Computer science
- Abstract
In this study, we evaluated the ability to identify individual words in a binary word classification task during imagined speech, using high-frequency activity (HFA; 70–150 Hz) features in the time domain. For this, we used an imagined word repetition task, cued with a word perception stimulus and followed by an overt word repetition, and compared results across the three conditions. We used support vector machines and introduced a non-linear time realignment into the classification framework in order to deal with temporal irregularities of speech. As expected, high classification accuracy was obtained in the listening (mean = 89%) and overt speech (mean = 86%) conditions, where speech stimuli were directly observed. In the imagined speech condition, where speech is generated internally by the patient, the results show for the first time that individual words in single trials can be classified with statistically significant accuracy. Classification accuracy reached 88% in a two-class classification framework, and average classification accuracy across fifteen word pairs was significant across five subjects (mean = 58%). The majority of electrodes carrying discriminative information were located in the superior temporal gyrus, the inferior frontal gyrus, and sensorimotor cortex, regions commonly associated with speech processing. These data represent a proof-of-concept study for basic decoding of speech imagery and delineate a number of key challenges to the use of speech-imagery neural representations for clinical applications.
- Published
- 2019
- Full Text
- View/download PDF
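The abstract's "non-linear time-realignment" is not specified in detail here; classic dynamic time warping (DTW) is a standard example of such realignment and can serve as an illustrative stand-in for comparing temporally misaligned trials:

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping distance between two 1-D feature
    sequences: compares signals whose timing is nonlinearly
    stretched or compressed, as imagined-speech trials often are."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same waveform at two different speeds aligns almost perfectly,
# while a genuinely different waveform does not.
a = np.sin(np.linspace(0, 2 * np.pi, 50))
b = np.sin(np.linspace(0, 2 * np.pi, 80))
c = np.cos(np.linspace(0, 2 * np.pi, 50))
assert dtw_distance(a, b) < dtw_distance(a, c)
```

A DTW-style distance can be plugged into a kernel or nearest-neighbour scheme so that classification tolerates trial-to-trial timing jitter.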
16. Tensor Decomposition for Imagined Speech Discrimination in EEG
- Author
- Jesús S. García-Salinas, Luis Villaseñor-Pineda, Carlos A. Reyes-García, and Alejandro Torres-Garcia
- Subjects
- Imagined speech, Speech recognition, Electroencephalography, Multivariate analysis, Standard deviation, Motor imagery, Robustness, Computer science, Brain–computer interface
- Abstract
Most research on electroencephalogram (EEG)-based Brain-Computer Interfaces (BCI) focuses on the use of motor imagery. In an attempt to improve control of these interfaces, the use of language instead of movement has recently been explored in the form of imagined speech. This work aims to discriminate imagined words in electroencephalogram signals. For this purpose, multiple variables of the signal and their relations are analyzed by means of multivariate data analysis, i.e., Parallel Factor Analysis (PARAFAC). In previous work, this method has proven useful for EEG analysis; nevertheless, to the best of our knowledge, this is the first attempt to analyze imagined speech signals with this approach. In addition, a novel use of the extracted PARAFAC components is proposed in order to improve discrimination of the imagined words. The obtained results, besides higher accuracy rates than related works, showed lower standard deviation among subjects, suggesting the effectiveness and robustness of the proposed method. These results encourage the use of multivariate analysis for BCI applications in combination with imagined speech signals.
- Published
- 2018
- Full Text
- View/download PDF
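PARAFAC decomposes a multi-way array into rank-1 components. A minimal rank-1 alternating-least-squares sketch on a synthetic tensor shows the core idea; this is illustrative only, and the authors' component extraction and classification steps are not reproduced:

```python
import numpy as np

def cp_rank1(X, n_iter=50):
    """Rank-1 PARAFAC/CP decomposition of a 3-way tensor by
    alternating least squares: X ~ outer(a, b, c)."""
    _, J, K = X.shape
    rng = np.random.default_rng(0)
    b = rng.standard_normal(J)
    c = rng.standard_normal(K)
    for _ in range(n_iter):
        # Each factor is a closed-form least-squares fit
        # while the other two factors are held fixed.
        a = np.einsum('ijk,j,k->i', X, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', X, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', X, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# Synthetic rank-1 tensor; in the EEG setting the three modes could
# stand for channels x frequencies x time windows.
a0, b0, c0 = np.arange(1.0, 5.0), np.ones(3), np.array([1.0, 2.0])
X = np.einsum('i,j,k->ijk', a0, b0, c0)
a, b, c = cp_rank1(X)
assert np.allclose(np.einsum('i,j,k->ijk', a, b, c), X, atol=1e-6)
```

Real PARAFAC uses rank R > 1 (e.g. via a tensor library); the per-trial factor loadings then serve as the discriminative features.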
17. EEG-Based Subjects Identification Based on Biometrics of Imagined Speech Using EMD
- Author
- Luis Alfredo Moctezuma and Marta Molinas
- Subjects
- Imagined speech, Biometrics, Electroencephalography, Minkowski distance, Pattern recognition, Random forest, Support vector machine, Naive Bayes classifier, Artificial intelligence, Computer science
- Abstract
When brain activity can be measured and interpreted, the potential for augmenting human capacities is promising. In this paper, EMD is used to decompose EEG signals recorded during imagined speech in order to use them as a biometric marker for creating a biometric recognition system. For each EEG channel, the most relevant Intrinsic Mode Functions (IMFs) are selected based on the Minkowski distance, and four features are computed for each IMF: the instantaneous and Teager energy distributions and the Higuchi and Petrosian fractal dimensions. To test the proposed method, a dataset of 20 subjects who imagined 30 repetitions of 5 words in Spanish is used. Four classifiers are compared on this task: random forest, SVM, naive Bayes, and k-NN. The accuracy obtained (up to 0.92 using linear SVM) after 10-fold cross-validation suggests that the proposed method based on EMD can be valuable for creating EEG-based biometrics of imagined speech for subject identification.
- Published
- 2018
- Full Text
- View/download PDF
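EMD itself is beyond a short sketch (packages such as PyEMD provide it), but two of the per-IMF features named in this abstract, the Teager energy and the Petrosian fractal dimension, are easy to show from their standard definitions. The signals below are synthetic examples, not the paper's data:

```python
import numpy as np

def teager_energy(x):
    """Mean Teager-Kaiser energy: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    return np.mean(x[1:-1] ** 2 - x[:-2] * x[2:])

def petrosian_fd(x):
    """Petrosian fractal dimension, computed from the number of
    sign changes in the signal's first difference."""
    diff = np.diff(x)
    n_delta = np.sum(diff[1:] * diff[:-1] < 0)  # direction changes
    n = len(x)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))

# A pure tone changes direction rarely; white noise changes direction
# constantly, so its Petrosian fractal dimension is higher.
t = np.linspace(0, 1, 500)
tone = np.sin(2 * np.pi * 5 * t)
rng = np.random.default_rng(2)
noise = rng.standard_normal(500)
assert petrosian_fd(noise) > petrosian_fd(tone)
assert teager_energy(tone) > 0
```

In the described pipeline such scalars, computed per selected IMF and per channel, are concatenated into the feature vector fed to the four classifiers.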
18. Towards Continuous Speech Recognition for BCI
- Author
- Tanja Schultz, Gerwin Schalk, Christian Herff, Peter Brunner, Adriana de Pesters, and Dominic Heger
- Subjects
- Imagined speech, Speech production, Speech perception, Speech recognition, Usability, Spelling, Natural communication, Interface (computing), Computer science, Brain–computer interface
- Abstract
For the last two decades, brain-computer interface (BCI) research has worked toward practical and useful applications for communication and control. Yet many BCI communication approaches suffer from unnatural interaction or time-consuming user training. As continuous speech provides a very natural communication approach, it has been a long-standing question whether it is possible to develop BCIs that perform speech recognition from cortical activity. Imagined speech as a BCI paradigm for locked-in patients would mean a large improvement in communication speed and usability, without the need for cumbersome spelling using individual letters. We showed for the first time that automatic speech recognition from neural signals is possible. Here, we evaluate the feasibility of speech recognition from neural signals using only temporal offsets associated with speech production, omitting information from speech perception. This analysis provides first insights into the potential use of imagined speech processes, for which no perceptive activity is present, for speech recognition.
- Published
- 2017
- Full Text
- View/download PDF
19. EEG Based Brain Computer Interface for Speech Communication: Principles and Applications
- Author
- Kusuma Mohanchandra, Snehanshu Saha, and G. M. Lingaraju
- Subjects
- Imagined speech, Speech production, Speech recognition, Electroencephalography, Control channel, Direct speech, Computer science, Computer technology, Brain–computer interface
- Abstract
EEG-based brain computer interfaces have emerged as a hot spot in neuroscience, machine learning, and rehabilitation research in recent years. A BCI provides a platform for direct communication between a human brain and a computer without the normal neurophysiological pathways. Because of their fast response to cognitive processes, the electrical signals of the brain are well suited as a non-motor-controlled mediation between human and computer, and can serve as a communication and control channel for many applications. Though the primary goal is to restore communication in severely paralyzed populations, BCI for speech communication has also gained recognition in a variety of non-medical fields: silent speech communication, cognitive biometrics, and synthetic telepathy, to name a few. This chapter surveys the diverse applications and principles of BCI technology used for speech communication. Ample evidence of speech communication by "locked-in" patients is presented; with the aid of assistive computer technology, some have been able to pen their memoirs. The current state-of-the-art techniques and control signals used for speech communication are described in brief, and possible future research directions are discussed. A comparison of indirect and direct methods of BCI speech production is given. The direct method captures the brain signals of intended speech or speech imagery, processes the signals to predict the speech, and synthesizes the speech output in real time. There is ample evidence that direct speech prediction from neurological signals is a promising technology, with fruitful results as well as challenging issues.
- Published
- 2014
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library