Imagined speech can be decoded from low- and cross-frequency intracranial EEG features
- Authors
- Timothée Proix, Jaime Delgado Saa, Andy Christen, Stephanie Martin, Brian N. Pasley, Robert T. Knight, Xing Tian, David Poeppel, Werner K. Doyle, Orrin Devinsky, Luc H. Arnal, Pierre Mégevand, Anne-Lise Giraud
- Affiliations
- Department of Basic Neurosciences, University of Geneva (UNIGE); University of California, Berkeley (UC Berkeley); New York University Shanghai (NYU Shanghai); East China Normal University, Shanghai (ECNU); Ernst Strüngmann Institute for Neuroscience; New York University (NYU); NYU Grossman School of Medicine; Institut de l'Audition (IDA), Institut Pasteur, INSERM, Université Paris Cité
- Funding
- EU FET BrainCom project (A.G.); NCCR Evolving Language, Swiss National Science Foundation agreement #51NF40_180888 (A.G.); NINDS R3723115 (R.T.K.); Swiss National Science Foundation project grant 163040 (A.G.); National Natural Science Foundation of China 32071099 (X.T.); Natural Science Foundation of Shanghai 20ZR1472100 (X.T.); Program of Introducing Talents of Discipline to Universities, Base B16018 (X.T.); NYU Shanghai Boost Fund (X.T.); Fondation Pour l'Audition FPA RD-2020-10 (L.A.); Swiss National Science Foundation career grant 167836 (P.M.); Swiss National Science Foundation career grant 193542 (T.P.)
- Subjects
- Adult, Male, Young Adult, Middle Aged, Female, Humans, Speech, Phonetics, Language, Brain, Brain Mapping, Brain-Computer Interfaces, Imagination, Electrodes, Electrocorticography, otorhinolaryngologic diseases, Cognitive science/Neuroscience, Neuroscience, General Physics and Astronomy, General Biochemistry, Genetics and Molecular Biology, General Chemistry, Multidisciplinary, Article
- Abstract
Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their performance in discriminating speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency coupling contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e. perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.

Reconstructing imagined speech from neural activity holds great promise for people with severe speech production deficits. Here, the authors demonstrate using human intracranial recordings that both low- and higher-frequency power and local cross-frequency coupling contribute to imagined speech decoding.
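The feature families named in the abstract (low-frequency power, high-frequency power, and local cross-frequency coupling) can be illustrated with a minimal sketch. The snippet below is not the authors' pipeline: it assumes a single-channel signal, standard Butterworth band-pass filtering, Hilbert-transform analytic amplitude for band power, and a mean-vector-length phase-amplitude coupling measure as one common way to quantify cross-frequency dynamics; all band edges and function names are illustrative choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def band_power(x, lo, hi, fs):
    """Mean power of the analytic amplitude in a frequency band."""
    return np.mean(np.abs(hilbert(bandpass(x, lo, hi, fs))) ** 2)

def phase_amplitude_coupling(x, fs, phase_band=(4, 8), amp_band=(70, 150)):
    """Mean-vector-length coupling: how strongly high-band amplitude
    is modulated by low-band phase (0 = none, 1 = maximal)."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Synthetic demo: a theta rhythm whose phase modulates high-gamma amplitude.
fs = 1000
t = np.arange(0, 5, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled_gamma = (1 + theta) * np.sin(2 * np.pi * 100 * t)
sig = theta + 0.5 * coupled_gamma \
    + 0.1 * np.random.default_rng(0).standard_normal(t.size)

features = {
    "low_power": band_power(sig, 1, 12, fs),          # low-frequency power
    "high_gamma_power": band_power(sig, 70, 150, fs),  # high-frequency power
    "pac": phase_amplitude_coupling(sig, fs),          # cross-frequency coupling
}
```

Per channel and trial, such features could then be fed to a classifier over the articulatory, phonetic, or vocalic label spaces described above; the synthetic signal here is only to show that the coupling measure responds when high-gamma amplitude genuinely follows low-frequency phase.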
- Published
- 2022