
Crossmodal attentional control sets between vision and audition

Authors:
Charles Spence
Christian Frings
Frank Mast
Publication Year:
2017
Publisher:
Elsevier, 2017.

Abstract

The interplay between top-down and bottom-up factors in attentional selection has been a topic of extensive research and controversy amongst scientists over the past two decades. According to the influential contingent capture hypothesis, a visual stimulus must match the feature(s) specified in the current attentional control sets in order to be selected automatically. Recently, however, evidence has been presented that attentional control sets affect not only visual but also crossmodal selection. The aim of the present study was therefore to establish contingent capture as a general principle of multisensory selection. A non-spatial interference task with bimodal (visual and auditory) distractors and bimodal targets was used, with the target and the distractors presented in close temporal succession. To perform the task correctly, participants only had to process a predefined target feature in one of the two modalities (e.g., colour when vision was the primary modality). Note that the additional crossmodal stimulation (e.g., a specific sound when audition was the secondary modality) was not relevant for selecting the correct response. Nevertheless, larger interference effects were observed when the distractor matched the target stimulus in both the primary and the secondary modality, and this pattern was even stronger when vision, rather than audition, was the primary modality. These results are therefore in line with the crossmodal contingent capture hypothesis. Early visual and auditory processing both appear to be affected by top-down control sets, even beyond the spatial dimension.

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....57204df7bd92f136abc85978a073071b