Time course of audio–visual phoneme identification: A cross-modal gating study

Authors :
Nara Ikumi
Sonia Kandel
Christophe Savariaux
Carolina Sánchez-García
Salvador Soto-Faraco
Source :
Seeing and Perceiving. 25:194
Publication Year :
2012
Publisher :
Brill, 2012.

Abstract

When both are present, visual and auditory information are combined to decode the speech signal. Past research has addressed the extent to which visual information helps distinguish confusable speech sounds, but has usually ignored the continuous nature of speech perception. Here we tap into the time course of the contribution of visual and auditory information during speech perception. To this end, we designed an audio–visual gating task using videos recorded with a high-speed camera. Participants were asked to identify gradually longer fragments of pseudowords varying in the central consonant. Spanish consonant phonemes with different degrees of visual and acoustic saliency were included and tested in visual-only, auditory-only and audio–visual trials. The data showed different patterns of contribution of unimodal and bimodal information during identification, depending on the visual saliency of the presented phonemes. In particular, for phonemes that are clearly more salient in one modality than the other, audio–visual performance equalled that of the better unimodal condition. For phonemes with more balanced saliency, audio–visual performance exceeded both unimodal conditions. These results shed new light on the time course of audio–visual speech integration.

Details

ISSN :
18784763
Volume :
25
Database :
OpenAIRE
Journal :
Seeing and Perceiving
Accession number :
edsair.doi...........2ce0332a11a45a248e20463f0d46d9a0
Full Text :
https://doi.org/10.1163/187847612x648233