13 results for "Crossmodal integration"
Search Results
2. Children With Reading Difficulty Rely on Unimodal Neural Processing for Phonemic Awareness
- Author
-
Chris McNorgan, James R. Booth, Emma B. Greenspon, and Melissa Randazzo
- Subjects
Crossmodal integration, reading difficulty, phonemic awareness, phonological awareness, dyslexia, audiovisual integration, rhyme, fMRI (functional magnetic resonance imaging), behavioral neuroscience, biological psychiatry, psychiatry and mental health, neuropsychology and physiological psychology, neurology, cognitive psychology
- Abstract
Phonological awareness skills in children with reading difficulty (RD) may reflect impaired automatic integration of orthographic and phonological representations. However, little is known about the underlying neural mechanisms of phonological awareness in children with RD. Eighteen children with RD, ages 9–13, participated in a functional magnetic resonance imaging (fMRI) study designed to assess the relationship of two constructs of phonological awareness, phoneme synthesis and phoneme analysis, with crossmodal rhyme judgment. Participants completed a rhyme judgment task presented in two modality conditions: unimodal (auditory only) and crossmodal (audiovisual). Measures of phonological awareness were correlated with unimodal, but not crossmodal, lexical processing. Moreover, these relationships were found only in unisensory brain regions, and not in multisensory brain areas. These results suggest that children with RD rely on unimodal representations and unisensory brain areas, and they provide insight into the role of phonemic awareness in mapping between auditory and visual modalities during literacy acquisition.
- Published
- 2019
- Full Text
- View/download PDF
3. Selective visual and crossmodal impairment in the discrimination of anger and fear expressions in severe alcohol use disorder
- Author
-
Federica Falagiarda, Philippe de Timary, Pierre Maurage, Fabien D'Hondt, Coralie Creupelandt, and Olivier Collignon
- Subjects
Crossmodal integration, emotion, emotional expression, anger, disgust, sadness, visual perception, stimulus modality, gating, ecological validity, audiology, pharmacology, toxicology, psychiatry and mental health, psychology
- Abstract
Background Severe alcohol use disorder (SAUD) is associated with impaired discrimination of emotional expressions. This deficit appears larger in crossmodal settings, when simultaneous inputs from different sensory modalities are presented. However, studies exploring emotional crossmodal processing in SAUD have so far relied on static faces and unmatched face/voice pairs, offering limited ecological validity. Our aim was therefore to assess emotional processing using a validated and ecological paradigm relying on dynamic audio-visual stimuli, manipulating the amount of emotional information available. Method Thirty individuals with SAUD and 30 matched healthy controls performed an emotional discrimination task requiring them to identify five emotions (anger, disgust, fear, happiness, sadness) expressed as visual, auditory, or auditory-visual segments of varying length. Sensitivity indices (d′) were computed to obtain an unbiased measure of emotional discrimination and were entered into a generalized linear mixed model. Incorrect emotional attributions were also scrutinized through confusion matrices. Results Discrimination levels varied across sensory modalities and emotions, and increased with stimulus duration. Crucially, performance also improved from unimodal to crossmodal conditions in both groups, but discrimination of crossmodal anger stimuli and of crossmodal and visual fear stimuli remained selectively impaired in SAUD. These deficits were not influenced by stimulus duration, suggesting that they were not modulated by the amount of emotional information available. Moreover, they were not associated with systematic error patterns reflecting specific confusions between emotions. Conclusions These results clarify the nature and extent of crossmodal impairments in SAUD and converge with earlier findings to ascribe a specific role to anger and fear in this pathology.
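As a methodological note on the abstract above: the sensitivity index d′ is standardly computed as the difference between the z-transformed hit and false-alarm rates. Below is a minimal sketch of that computation, assuming hypothetical trial counts and a log-linear correction; the function name and numbers are illustrative, not the authors' pipeline.

```python
# Hypothetical sketch of a sensitivity-index (d') computation; the
# log-linear correction and all counts are assumptions for illustration,
# not the authors' exact analysis.
from scipy.stats import norm

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Unbiased discrimination sensitivity: d' = z(hit rate) - z(false-alarm rate).

    Adding 0.5 to each cell (log-linear correction) keeps the z-transform
    finite when a raw rate would be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 27 hits out of 30 target trials, 6 false alarms out of 30 noise trials.
print(d_prime(27, 3, 6, 24))  # ~ 2.0; chance performance gives d' = 0
```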
- Published
- 2020
- Full Text
- View/download PDF
4. Written sentence context effects on acoustic-phonetic perception: fMRI reveals cross-modal semantic-perceptual interactions
- Author
-
Yuli Zhu, Domenic Minicucci, Sara Guediche, and Sheila E. Blumstein
- Subjects
Crossmodal integration, speech perception, semantic context, context effect, phonetics, ambiguity, amodal perception, functional magnetic resonance imaging, brain mapping, temporal lobe, semantics, comprehension, reading, sentence processing, speech and hearing, linguistics and language, cognitive neuroscience, cognitive psychology
- Abstract
This study examines cross-modality effects of a semantically biased written sentence context on the perception of an acoustically ambiguous word target, identifying neural areas sensitive to interactions between sentential bias and phonetic ambiguity. Of interest is whether the locus or nature of the interactions resembles those previously demonstrated for auditory-only effects. fMRI results show significant interaction effects in right mid-middle temporal gyrus (RmMTG) and bilateral anterior superior temporal gyri (aSTG), regions along the ventral language comprehension stream that map sound onto meaning. These regions are more anterior than those previously identified for auditory-only effects; however, the same cross-over interaction pattern emerged, implying that similar underlying computations are at play. The findings suggest that the mechanisms integrating information across modality and across sentence and phonetic levels of processing recruit amodal areas where reading and spoken lexical and semantic access converge. Taken together, the results support interactive accounts of speech and language processing. This work was supported in part by the National Institutes of Health, NIDCD grant R01 DC006220.
- Published
- 2019
5. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias
- Author
-
Sara Invitto, Daniele Turchi, Vitoantonio Bevilacqua, Rosanna Scardino, Antonio Calcagnì, Irio De Feudis, Antonio Brunetti, Giulia Piraino, Marina de Tommaso, and Arianna Mignozzi
- Subjects
Crossmodal integration, music cognition, musical appraisal, face recognition, facial expression, N170 ERP, emotional salience, emotional biases, emotional expression, auditory perception, stimulus modality, active listening, music psychology, cognitive neuroscience, neuropsychology and physiological psychology, cognitive psychology, neuroscience
- Abstract
Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities through a top-down process. The aim of this work was to investigate how emotional face recognition is modulated by listening to music, and how this modulation varies with the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task while listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 event-related potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings were that musicians' behavioral responses and N170 components were more strongly affected by the emotional value of the music administered during the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information arriving through multiple sensory channels activates a crossmodal integration process that depends on the stimuli's emotional salience and the listener's appraisal.
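For orientation, the linear mixed-effects analysis named in the abstract can be sketched as follows: N170 amplitude modeled with fixed effects of musical salience and musicianship and a random intercept per participant. The simulated data frame, variable names, and model formula are assumptions for illustration, not the study's actual specification.

```python
# Hedged sketch of a linear mixed-effects model on ERP amplitudes.
# All data are simulated; column names and the formula are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_trials = 24, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "musician": np.repeat(rng.integers(0, 2, n_subj), n_trials),  # 0/1 group flag
    "salience": rng.uniform(1, 9, n_subj * n_trials),             # emotional rating
})
# Simulated effect: salience modulates N170 amplitude more in musicians.
df["n170_amp"] = (-4.0 - 0.3 * df["salience"] * df["musician"]
                  + rng.normal(0, 1.5, len(df)))

# Fixed effects of salience and group (and their interaction),
# random intercept per subject.
model = smf.mixedlm("n170_amp ~ salience * musician", df, groups=df["subject"])
print(model.fit().summary())
```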
- Published
- 2017
6. Mapping nociceptive stimuli in a peripersonal frame of reference: evidence from a temporal order judgment task
- Author
-
Annick De Paepe, Charles Spence, Geert Crombez, and Valéry Legrain
- Subjects
Crossmodal integration, nociception, peripersonal space, temporal order judgment, prior entry, frame of reference, somatosensory system, visual perception, spatial attention, sensory cues, body representation, multisensory representation, psychophysics, reaction time, attention, space perception, cognitive neuroscience, behavioral neuroscience, neuroscience
- Abstract
The ability to localize nociceptive stimuli on the body surface is essential for an organism to respond appropriately to potential physical threats. This ability not only requires a representation of the space of the observer's body, but also of the external space with respect to their body. Therefore, localizing nociceptive stimuli requires coordinating multiple senses into an integrated frame of reference. The peripersonal frame of reference allows for the coding of the position of somatosensory stimuli on the body surface and the position of stimuli occurring close to the body (e.g., visual stimuli). Intensively studied for touch, this topic has been largely ignored when it comes to nociception. Here, we investigated, using a temporal order judgment task, whether the spatial perception of nociceptive stimuli is coordinated with that of proximal visual stimuli into an integrated representation of peripersonal space. Participants judged which of two nociceptive stimuli, one presented to either hand, had been presented first. Each pair of nociceptive stimuli was preceded by lateralized visual cues presented either unilaterally or bilaterally, and either close to, or far from, the participant's body. The perception of nociceptive stimuli was biased in favor of the stimulus delivered on the hand adjacent to the unilateral visual cue, especially when the cue was presented near the participant's hand. These results therefore suggest that a peripersonal frame of reference is used to map the position of nociceptive stimuli in multisensory space. We propose that peripersonal space constitutes a kind of margin of safety around the body to alert an organism to possible threats.
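As background on the temporal order judgment (TOJ) method used above: such data are commonly summarized by fitting a cumulative Gaussian to the proportion of "one side first" responses across stimulus onset asynchronies (SOAs), yielding the point of subjective simultaneity (PSS) and a just-noticeable difference (JND). The sketch below uses invented response proportions and is not the authors' analysis code.

```python
# Illustrative TOJ analysis: fit a cumulative Gaussian to the proportion of
# "right hand first" responses as a function of SOA. All data are fabricated.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa_ms = np.array([-90, -60, -30, 0, 30, 60, 90])          # positive = right leads
p_right_first = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.97])

def cum_gauss(soa, pss, sigma):
    """Psychometric function: probability of judging 'right first' at each SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa_ms, p_right_first, p0=(0.0, 50.0))
jnd = sigma * norm.ppf(0.75)   # SOA yielding 75% correct ordering
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
# A PSS shifted away from 0 ms after a unilateral visual cue would reflect
# the prior-entry bias described in the abstract.
```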
- Published
- 2016
7. The Complex Interplay Between Multisensory Integration and Perceptual Awareness
- Author
-
Uta Noppeney, Claudia Lunghi, Ophelia Deroy, Charles Spence, Nathan Faivre, and Máté Aller
- Subjects
Crossmodal integration, multisensory integration, perceptual awareness, consciousness, metacognition, perception, interdependence, sensory systems, cognitive neuroscience, experimental and cognitive psychology, psychology
- Abstract
The integration of information has been considered a hallmark of human consciousness, as it requires information being globally available via widespread neural interactions. Yet the complex interdependencies between multisensory integration and perceptual awareness, or consciousness, remain to be defined. While perceptual awareness has traditionally been studied in a single sense, in recent years we have witnessed a surge of interest in the role of multisensory integration in perceptual awareness. Based on a recent IMRF symposium on multisensory awareness, this review discusses three key questions from conceptual, methodological and experimental perspectives: (1) What do we study when we study multisensory awareness? (2) What is the relationship between multisensory integration and perceptual awareness? (3) Which experimental approaches are most promising to characterize multisensory awareness? We hope that this review paper will provoke lively discussions, novel experiments, and conceptual considerations to advance our understanding of the multifaceted interplay between multisensory integration and consciousness.
- Published
- 2016
- Full Text
- View/download PDF
8. Crossmodal duration perception involves perceptual grouping, temporal ventriloquism, and variable internal clock rates
- Author
-
Jorrit S. Montijn, P. Christiaan Klink, and Richard J. A. van Wezel
- Subjects
Crossmodal integration, time perception, interval perception, temporal ventriloquism, scalar timing, scalar expectancy, perceptual distortion, intramodal dispersion, audiovisual perception, pitch perception, psychophysics, decision making, attention, judgment, sensory systems, cognitive psychology
- Abstract
Here, we investigate how audiovisual context affects perceived event duration with experiments in which observers reported which of two stimuli they perceived as longer. Target events were visual and/or auditory and could be accompanied by nontargets in the other modality. Our results demonstrate that the temporal information conveyed by irrelevant sounds is automatically used when the brain estimates visual durations, but that irrelevant visual information does not affect perceived auditory duration (Experiment 1). We further show that auditory influences on subjective visual durations occur only when the temporal characteristics of the stimuli promote perceptual grouping (Experiments 1 and 2). Placed in the context of the scalar expectancy theory of time perception, our third and fourth experiments imply that audiovisual context can lead both to changes in the rate of an internal clock and to temporal ventriloquism-like effects on perceived on- and offsets. Finally, intramodal grouping of auditory stimuli diminished any crossmodal effects, suggesting a strong preference for intramodal over crossmodal perceptual grouping (Experiment 5).
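To make the "variable internal clock rate" idea concrete, here is a toy pacemaker-accumulator simulation in the spirit of scalar expectancy theory; all rates, durations, and noise levels are invented for illustration and are not the paper's model.

```python
# Toy pacemaker-accumulator model: perceived duration ~ pulses accumulated
# while a stimulus is on. Parameters are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)

def pulse_counts(duration_s: float, clock_hz: float = 20.0,
                 rate_cv: float = 0.15, n_trials: int = 100_000) -> np.ndarray:
    """Pulses accumulated over a physical duration when the clock rate
    fluctuates from trial to trial (multiplicative noise)."""
    rates = clock_hz * (1.0 + rate_cv * rng.standard_normal(n_trials))
    return rates * duration_s

# A faster clock (e.g., driven by an arousing auditory context) makes the
# same 1 s event accumulate more pulses, so it is judged as longer.
print(pulse_counts(1.0, clock_hz=20.0).mean(), pulse_counts(1.0, clock_hz=24.0).mean())

# Scalar property: the standard deviation grows with the mean, so the
# coefficient of variation stays roughly constant across durations.
for t in (0.5, 1.0, 2.0):
    c = pulse_counts(t)
    print(t, round(c.std() / c.mean(), 3))
```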
- Published
- 2011
9. Sound-Induced Flash Illusion is Resistant to Feedback Training
- Author
-
Shinsuke Shimojo, Orna Rosenthal, and Ladan Shams
- Subjects
Crossmodal integration, auditory perception, visual perception, illusions, feedback, reward, learning, training effect, decision making, acoustic stimulation, photic stimulation, clinical neurology, neurosciences, psychology, cognitive psychology
- Abstract
A single flash accompanied by two auditory beeps tends to be perceived as two flashes (Shams et al., Nature 408:788, 2000; Cogn Brain Res 14:147–152, 2002). This phenomenon is known as the 'sound-induced flash illusion.' Previous neuroimaging studies have shown that this illusion is correlated with modulation of activity in early visual cortical areas (Arden et al., Vision Res 43(23):2469–2478, 2003; Bhattacharya et al., NeuroReport 13:1727–1730, 2002; Shams et al., NeuroReport 12(17):3849–3852, 2001; Neurosci Lett 378(2):76–81, 2005; Watkins et al., Neuroimage 31:1247–1256, 2006; Neuroimage 37:572–578, 2007; Mishra et al., J Neurosci 27(15):4120–4131, 2007). We examined how robust the illusion is by testing whether its frequency can be reduced by providing feedback. We found that the sound-induced flash illusion was resistant to feedback training, except when the amount of monetary reward was made dependent on accuracy. Even in the latter case, however, participants reported that they still perceived two illusory flashes even when they correctly reported a single flash. Moreover, the training effect seemed to disappear once participants were no longer provided with feedback, suggesting a short-lived refinement of the discrimination between illusory and physical double flashes rather than elimination of the illusory percept. These findings indicate that the effect of sound on the perceptual representation of visual stimuli is strong and robust to feedback training, and they provide further evidence against decision-level accounts of the sound-induced flash illusion.
- Published
- 2009
- Full Text
- View/download PDF
10. Multisensory perception as an associative learning process
- Author
-
Kevin Connolly
- Subjects
Crossmodal integration, crossmodal interaction, multisensory integration, multimodal interaction, feature binding, perceptual learning, associative learning, perception, cognitive science, psychology
- Abstract
Suppose that you are at a live jazz show. The drummer begins a solo. You see the cymbal jolt and you hear the clang. But in addition to seeing the cymbal jolt and hearing the clang, you are also aware that the jolt and the clang are part of the same event. Casey O'Callaghan (forthcoming) calls this awareness "intermodal feature binding awareness." Psychologists have long assumed that multimodal perceptions such as this one are the result of a subpersonal feature binding mechanism (see Vatakis and Spence, 2007; Kubovy and Schutz, 2010; Pourtois et al., 2000; and Navarra et al., 2012). I present new evidence against this. I argue that there is no automatic feature binding mechanism that couples features like the jolt and the clang together. Instead, when you experience the jolt and the clang as part of the same event, this is the result of an associative learning process. The cymbal's jolt and the clang are best understood as a single learned perceptual unit, rather than as automatically bound. I outline the specific learning process in perception called "unitization," whereby we come to "chunk" the world into multimodal units. Unitization has never before been applied to multimodal cases. Yet I argue that this learning process can do the same work that intermodal binding would do, and that this issue has important philosophical implications. Specifically, whether we take multimodal cases to involve a binding mechanism or an associative process bears on philosophical issues ranging from Molyneux's question to the question of how active or passive we take perception to be.
- Published
- 2014
- Full Text
- View/download PDF
11. Pre-attentive modulation of brain responses to tones in coloured-hearing synesthetes
- Author
-
Martin Meyer, Stefan Elmer, Lars Rogenmoser, and Lutz Jäncke
- Subjects
Coloured-hearing synesthesia, synesthesia, crossmodal integration, mismatch negativity, EEG, electroencephalography, auditory cortex, auditory perception, visual cortex, parietal lobe, color perception, attention, brain waves, acoustic stimulation, case-control studies, perceptual disorders, cellular and molecular neuroscience, general neuroscience, cognitive psychology
- Abstract
Background Coloured-hearing (CH) synesthesia is a perceptual phenomenon in which an acoustic stimulus (the inducer) initiates a concurrent colour perception (the concurrent). Individuals with CH synesthesia "see" colours when hearing tones, words, or music, a phenomenon suggesting a close relationship between auditory and visual representations. To date, it is still unknown whether the perception of colours is associated with a modulation of brain function in the inducing brain area, namely the auditory-related cortex and associated brain areas. In addition, there is an ongoing debate as to whether attention to the inducer is required for eliciting a visual concurrent, or whether the latter can emerge pre-attentively. Results Using EEG in the context of a pre-attentive mismatch negativity (MMN) paradigm, we show that the binding of tones and colours in CH synesthetes is associated with increased MMN amplitudes in response to deviant tones expected to induce novel concurrent colour perceptions. Most notably, the increased MMN amplitudes we observed in CH synesthetes were associated with stronger intracerebral current densities originating from the auditory cortex, parietal cortex, and ventral visual areas. Conclusions The automatic binding of tones and colours in CH synesthetes is accompanied by an early pre-attentive process recruiting the auditory cortex, inferior and superior parietal lobules, and ventral occipital areas.
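As a pointer to how MMN amplitude is conventionally quantified, independent of this study's pipeline: average the deviant-minus-standard difference wave over a post-stimulus window (roughly 100–250 ms). The arrays below are simulated stand-ins for real epoched EEG, an assumption for illustration only.

```python
# Minimal sketch of an MMN amplitude computation on simulated epochs
# (trials x samples). Window bounds and all values are illustrative.
import numpy as np

fs = 500                                   # sampling rate (Hz)
times = np.arange(-0.1, 0.4, 1 / fs)       # -100 ms .. 400 ms around tone onset

rng = np.random.default_rng(1)
standard_epochs = rng.normal(0.0, 2.0, size=(200, times.size))
deviant_epochs = rng.normal(0.0, 2.0, size=(60, times.size))
# Simulated MMN: deviants carry an extra negativity between 100 and 250 ms.
mmn_window = (times >= 0.10) & (times <= 0.25)
deviant_epochs[:, mmn_window] -= 3.0

# Difference wave = average deviant ERP minus average standard ERP.
difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
mmn_amplitude = difference_wave[mmn_window].mean()
print(f"MMN amplitude: {mmn_amplitude:.2f} µV")  # more negative = larger MMN
```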
- Published
- 2012
- Full Text
- View/download PDF
12. Multisensory synesthetic interactions in the speeded classification of visual size
- Author
-
Alberto Gallace and Charles Spence
- Subjects
Crossmodal integration, synaesthesia, vision, audition, visual perception, speech perception, pitch perception, size perception, frequency, judgment, reaction time, attention, response priming, cognition, sensory systems, experimental and cognitive psychology, general psychology, cognitive psychology
- Abstract
In the present study, we attempted to demonstrate a synesthetic relationship between auditory frequency and visual size. In Experiment 1, participants performed a speeded visual size discrimination task in which they had to judge whether a variable-sized disk was bigger or smaller than a standard reference disk. A task-irrelevant sound that was either synesthetically congruent with the relative size of the disk (e.g., a low-frequency sound presented with a bigger disk) or synesthetically incongruent with it (e.g., a low-frequency sound presented with a smaller disk) was sometimes presented together with the variable disk. Reaction times were shorter in the synesthetically congruent condition than in the incongruent condition. Verbal labeling and semantic mediation interpretations of this interaction were explored in Experiment 2, in which high- and low-frequency sounds were presented in separate blocks of trials, and in Experiment 3, in which the tones were replaced by the spoken words "high" and "low." Response priming/bias explanations were ruled out in Experiment 4, in which a synesthetic congruency effect was still reported even when participants made same-versus-different discrimination responses regarding the relative sizes of the two disks. Taken together, these results provide the first empirical demonstration that the relative frequency of an irrelevant sound can influence the speed with which participants judge the size of visual stimuli when the sound varies on a trial-by-trial basis along a synesthetically compatible dimension. The possible cognitive bases for this synesthetic association are also discussed.
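The congruency effect reported above is simply the mean reaction-time difference between incongruent and congruent trials; a minimal sketch with fabricated RTs follows.

```python
# Illustrative computation of a synesthetic congruency effect.
# RT distributions are fabricated for demonstration only.
import numpy as np

rng = np.random.default_rng(3)
rt_congruent = rng.normal(420, 60, size=200)     # e.g., low tone + bigger disk
rt_incongruent = rng.normal(455, 60, size=200)   # e.g., low tone + smaller disk

effect_ms = rt_incongruent.mean() - rt_congruent.mean()
print(f"Congruency effect: {effect_ms:.0f} ms")  # positive = congruent is faster
```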
- Published
- 2006
13. 'Acoustical vision' of below threshold stimuli: interaction among spatially converging audiovisual inputs
- Author
-
Nadia Bolognini, Francesca Frassinetti, Andrea Serino, and Elisabetta Làdavas
- Subjects
Crossmodal integration, multisensory integration, visual-auditory interaction, superior colliculus, attention, detection theory, sensory threshold, perceptual sensitivity, subliminal stimuli, visual perception, space perception, acoustic stimulation, photic stimulation, neurophysiology, general neuroscience, psychology
- Abstract
Crossmodal spatial integration between auditory and visual stimuli is a common phenomenon in space perception. The principles underlying such integration have been outlined by neurophysiological and behavioral studies in animals; this study investigated whether the integrative effects observed in animals also apply to humans. In this experiment we systematically varied the spatial disparity (0°, 16°, and 32°) and the temporal interval (0, 100, 200, 300, 400, and 500 ms) between the visual and the auditory stimuli. Normal subjects were required to detect visual stimuli presented below threshold either in unimodal visual conditions or in crossmodal audiovisual conditions. Signal detection measures were used. An enhancement of perceptual sensitivity (d′) for luminance detection was found when the audiovisual stimuli followed the simple spatial and temporal rules governing multisensory integration at the neuronal level.
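For context, the animal literature this abstract alludes to quantifies multisensory gain as the percent enhancement of the crossmodal response over the best unimodal response. A minimal sketch with invented d′ values, not the authors' data:

```python
# Illustrative multisensory response-enhancement index:
# enhancement = (crossmodal - best unimodal) / best unimodal * 100.
# The d' values below are invented for demonstration.

def enhancement(crossmodal: float, best_unimodal: float) -> float:
    """Percent gain of the crossmodal response over the best unimodal one."""
    return 100.0 * (crossmodal - best_unimodal) / best_unimodal

d_prime_visual_only = 0.9    # below-threshold flash, visual-alone condition
d_prime_audiovisual = 1.4    # same flash with a spatially/temporally aligned sound

print(f"{enhancement(d_prime_audiovisual, d_prime_visual_only):.0f}% enhancement")
# Per the spatial and temporal rules, the gain should collapse at 16-32 deg
# disparity or at audio-visual lags of several hundred milliseconds.
```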
- Published
- 2003