
Vowel Sound Synthesis from Electroencephalography during Listening and Recalling

Authors:
Yousuke Ogata
Yasuharu Koike
Ludovico Minati
Natsue Yoshimura
Wataru Akashi
Hiroyuki Kambara
Source:
Advanced Intelligent Systems, Vol. 3, Iss. 2 (2021)
Publication Year:
2021
Publisher:
Wiley

Abstract

Recent advances in brain imaging technology have furthered our knowledge of the neural basis of auditory and speech processing, often via contributions from invasive brain signal recording and stimulation studies conducted intraoperatively. Herein, an approach for synthesizing vowel sounds directly from scalp‐recorded electroencephalography (EEG), a noninvasive neurophysiological recording method, is demonstrated. Given cortical current signals derived from EEG acquired while human participants listen to and recall (i.e., imagine) two vowels, /a/ and /i/, sound parameters are estimated by a convolutional neural network (CNN). The speech synthesized from the estimated parameters is sufficiently natural to achieve recognition rates above 85% in a subsequent sound discrimination task. Notably, the CNN identifies the involvement of the brain areas mediating the “what” auditory stream, namely the superior temporal, middle temporal, and Heschl's gyri, demonstrating the efficacy of the computational method in extracting auditory‐related information from neuroelectrical activity. Differences in cortical sound representation between listening and recalling are further revealed: the fusiform, calcarine, and anterior cingulate gyri contribute during listening, whereas the inferior occipital gyrus is engaged during recollection. The proposed approach can expand the scope of EEG in decoding auditory perception, which requires high spatial and temporal resolution.
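To make the described pipeline concrete, the following is a minimal sketch of a CNN that regresses sound parameters from cortical current time series, as the abstract outlines. This is not the authors' implementation: the number of cortical sources, the time window, the layer sizes, and the count of output sound parameters are all illustrative assumptions, written here in PyTorch.

```python
# Minimal sketch (illustrative, not the paper's architecture) of the
# decoding step: cortical current signals derived from EEG are fed to a
# 1-D CNN that estimates speech-synthesis parameters for a vowel.
import torch
import torch.nn as nn

class VowelParameterCNN(nn.Module):
    def __init__(self, n_sources: int = 64, n_params: int = 8):
        # n_sources: assumed number of cortical current sources
        # n_params: assumed number of sound-synthesis parameters
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_sources, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.regressor = nn.Linear(64, n_params)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_sources, n_timepoints) cortical current series
        h = self.features(x).squeeze(-1)
        return self.regressor(h)  # estimated sound parameters

if __name__ == "__main__":
    model = VowelParameterCNN()
    currents = torch.randn(4, 64, 500)  # 4 trials, 64 sources, 500 samples
    print(model(currents).shape)  # torch.Size([4, 8])
```

In such a setup, the estimated parameters would drive a downstream speech synthesizer, and the convolutional filters over source channels are what allow attributing decoding performance to particular cortical areas, as the paper does for the “what” auditory stream.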

Details

Language:
English
ISSN:
2640-4567
Volume:
3
Issue:
2
Database:
OpenAIRE
Journal:
Advanced Intelligent Systems
Accession number:
edsair.doi.dedup.....860951d25a15dfea2a390f6a2b50c628