10 results for "Murray, Micah M."
Search Results
2. Contextual factors multiplex to control multisensory processes
- Author
- Sarmiento, Beatriz, Matusz, Pawel J., Sanabria, Daniel, and Murray, Micah M.
- Subjects
- Adult, Male, Analysis of Variance, Brain Mapping, Adolescent, Brain, Electroencephalography, Young Adult, Acoustic Stimulation, Nonlinear Dynamics, Auditory Perception, Reaction Time, Visual Perception, Humans, Attention, Female, Evoked Potentials, Photic Stimulation, Research Articles
- Abstract
This study analyzed high‐density event‐related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task‐irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory‐visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross‐modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non‐linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top‐down attentional control that further modulates cross‐modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context‐based control over multisensory processing, whose influences multiplex across finer and broader time scales. Hum Brain Mapp 37:273–288, 2016. © 2015 Wiley Periodicals, Inc.
- Published
- 2016
3. Contextual Factors Multiplex to Control Multisensory Processes.
- Author
- Sarmiento, Beatriz R., Matusz, Pawel J., Sanabria, Daniel, and Murray, Micah M.
- Abstract
This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
4. Top-down control and early multisensory processes: chicken vs. egg.
- Author
- De Meo, Rosanna, Murray, Micah M., Clarke, Stephanie, and Matusz, Pawel J.
- Subjects
- PERCEPTUAL motor learning, BEHAVIORAL research, AUDITORY perception, VISUAL perception, TRANSCRANIAL magnetic stimulation
- Abstract
The article examines the role of early-latency multisensory interactions (eMSI) as a marker of bottom-up multisensory processes that facilitate perception and behavior independently of top-down attentional processes. Topics discussed include the interaction of auditory with visual perception, the results of studies comparing attended and unattended multisensory stimuli, and the effect of transcranial magnetic stimulation (TMS)-driven visual cortex activity on behavior.
- Published
- 2015
- Full Text
- View/download PDF
5. More Voices Persuade: The Attentional Benefits of Voice Numerosity.
- Author
- Chang, Hannah H., Mukherjee, Anirban, and Chattopadhyay, Amitava
- Subjects
- HUMAN voice, PERSUASION (Psychology), CONSUMER psychology, NARRATION, ATTENTION, MACHINE learning, VIDEOS, MARKETING, TEXT mining
- Abstract
The authors posit that in an initial exposure to a broadcast video, hearing different voices narrate (in succession) a persuasive message encourages consumers' attention and processing of the message, thereby facilitating persuasion; this is referred to as the voice numerosity effect. Across four studies (plus validation and replication studies)—including two large-scale, real-world data sets (with more than 11,000 crowdfunding videos and over 3.6 million customer transactions, and more than 1,600 video ads) and two controlled experiments (with over 1,800 participants)—the results provide support for the hypothesized effect. The effect (1) has consequential, economic implications in a real-world marketplace, (2) is more pronounced when the message is easier to comprehend, (3) is more pronounced when consumers have the capacity to process the ad message, and (4) is mediated by the favorability of consumers' cognitive responses. The authors demonstrate the use of machine learning, text mining, and natural language processing to process and analyze unstructured (multimedia) data. Theoretical and marketing implications are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. The context-contingent nature of cross-modal activations of the visual cortex.
- Author
- Matusz, Pawel J., Retsa, Chrysa, and Murray, Micah M.
- Subjects
- VISUAL cortex, OCCIPITAL lobe, BRAIN imaging, NEUROSCIENCES, BRAIN research
- Abstract
Real-world environments are nearly always multisensory in nature. Processing in such situations confers perceptual advantages, but its automaticity remains poorly understood. Automaticity has been invoked to explain the activation of visual cortices by laterally-presented sounds. This has been observed even when the sounds were task-irrelevant and spatially uninformative about subsequent targets. An auditory-evoked contralateral occipital positivity (ACOP) at ~ 250 ms post-sound onset has been postulated as the event-related potential (ERP) correlate of this cross-modal effect. However, the spatial dimension of the stimuli was nevertheless relevant in virtually all prior studies where the ACOP was observed. By manipulating the implicit predictability of the location of lateralised sounds in a passive auditory paradigm, we tested the automaticity of cross-modal activations of visual cortices. 128-channel ERP data from healthy participants were analysed within an electrical neuroimaging framework. The timing, topography, and localisation resembled previous characterisations of the ACOP. However, the cross-modal activations of visual cortices by sounds were critically dependent on whether the sound location was (un)predictable. Our results are the first direct evidence that this particular cross-modal process is not (fully) automatic; instead, it is context-contingent. More generally, the present findings provide novel insights into the importance of context-related factors in controlling information processing across the senses, and call for a revision of current models of automaticity in cognitive sciences. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
7. Auditory–somatosensory multisensory interactions in humans: Dissociating detection and spatial discrimination
- Author
- Sperdin, Holger F., Cappe, Céline, and Murray, Micah M.
- Subjects
- SOMATOSENSORY evoked potentials, PERCEPTUAL motor learning, DISCRIMINATION learning, REACTION time, SPATIAL behavior, SELECTIVITY (Psychology), ATTENTION
- Abstract
Simple reaction times (RTs) to auditory–somatosensory (AS) multisensory stimuli are facilitated over their unisensory counterparts both when stimuli are delivered to the same location and when separated. In two experiments we addressed the possibility that top-down and/or task-related influences can dynamically impact the spatial representations mediating these effects and the extent to which multisensory facilitation will be observed. Participants performed a simple detection task in response to auditory, somatosensory, or simultaneous AS stimuli that in turn were either spatially aligned or misaligned by lateralizing the stimuli. Additionally, we also informed the participants that they would be retrogradely queried (one-third of trials) regarding the side where a given stimulus in a given sensory modality was presented. In this way, we sought to have participants attending to all possible spatial locations and sensory modalities, while nonetheless having them perform a simple detection task. Experiment 1 provided no cues prior to stimulus delivery. Experiment 2 included spatially uninformative cues (50% of trials). In both experiments, multisensory conditions significantly facilitated detection RTs with no evidence for differences according to spatial alignment (though general benefits of cuing were observed in Experiment 2). Facilitated detection occurs even when attending to spatial information. Performance with probes, quantified using sensitivity (d′), was impaired following multisensory trials in general and significantly more so following misaligned multisensory trials. This indicates that spatial information is not available, despite being task-relevant. The collective results support a model wherein early AS interactions may result in a loss of spatial acuity for unisensory information. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
8. Towards understanding how we pay attention in naturalistic visual search settings
- Author
- Turoman, Nora, Tivadar, Ruxandra I., Matusz, Pawel J., Retsa, Chrysa, and Murray, Micah M.
- Subjects
- Adult, Male, Female, Young Adult, Humans, Attention, Attentional control, Visual search, Multisensory, Temporal predictability, Semantic congruence, Context, Predictability, Real-world, Motivation, N2pc, Electroencephalography, Evoked Potentials, Reaction Time, Visual Perception, Time Perception, Cues, Cognitive Neuroscience, Neurology, Cognitive psychology
- Abstract
Research on attentional control has largely focused on single senses and the importance of behavioural goals in controlling attention. However, everyday situations are multisensory and contain regularities, both likely influencing attention. We investigated how visual attentional capture is simultaneously impacted by top-down goals, the multisensory nature of stimuli, and the contextual factors of stimuli's semantic relationship and temporal predictability. Participants performed a multisensory version of the Folk et al. (1992) spatial cueing paradigm, searching for a target of a predefined colour (e.g. a red bar) within an array preceded by a distractor. We manipulated: 1) stimuli's goal-relevance via distractor's colour (matching vs. mismatching the target), 2) stimuli's multisensory nature (colour distractors appearing alone vs. with tones), 3) the relationship between the distractor sound and colour (arbitrary vs. semantically congruent) and 4) the temporal predictability of distractor onset. Reaction-time spatial cueing served as a behavioural measure of attentional selection. We also recorded 129-channel event-related potentials (ERPs), analysing the distractor-elicited N2pc component both canonically and using a multivariate electrical neuroimaging framework. Behaviourally, arbitrary target-matching distractors captured attention more strongly than semantically congruent ones, with no evidence for context modulating multisensory enhancements of capture. Notably, electrical neuroimaging analyses of surface-level EEG revealed context-based influences on attention to both visual and multisensory distractors, both in how strongly they activated the brain and in the type of brain networks activated. For both processes, the context-driven brain response modulations occurred long before the N2pc time-window, with topographic (network-based) modulations at ~30ms, followed by strength-based modulations at ~100ms post-distractor onset.
Our results reveal that both stimulus meaning and predictability modulate attentional selection, and they interact while doing so. Meaning, in addition to temporal predictability, is thus a second source of contextual information facilitating goal-directed behaviour. More broadly, in everyday situations, attention is controlled by an interplay between one’s goals, stimuli’s perceptual salience, meaning and predictability. Our study calls for a revision of attentional control theories to account for the role of contextual and multisensory control.
- Published
- 2021
- Full Text
- View/download PDF
9. Selective attention to sound features mediates cross-modal activation of visual cortices.
- Author
- Retsa, Chrysa, Matusz, Pawel J., Schnupp, Jan W.H., and Murray, Micah M.
- Subjects
- VISUAL cortex, SELECTIVITY (Psychology), AUDITORY evoked response
- Abstract
Contemporary schemas of brain organization now include multisensory processes both in low-level cortices as well as at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices; a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some claim this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determinant role. Here, we investigated whether selective attention to spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity or syllable. Sound acoustics were held constant, and their location was always equiprobable (50% left, 50% right). The only manipulation was to which sound dimension participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task-relevance and spatial (un)predictability in determining the presence of the cross-modal activation of visual cortices.
• Crossmodal facilitation of information processing is well documented.
• Lateralised sounds activate contralateral visual cortices – the ACOP.
• Selective attention to spatial features of sounds is a determinant of the ACOP.
• ACOP followed from ipsilateral suppression.
• The ACOP depends on both stimulus regularities and task-relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
10. The Contributions of Sensory Dominance and Attentional Bias to Cross-modal Enhancement of Visual Cortex Excitability
- Author
- Cappe, Céline, Thut, Gregor, Murray, Micah M., and Romei, Vincenzo
- Subjects
- Adult, Male, Female, Young Adult, Humans, Attention, Perception, Phosphenes, Sensory system, Stimulus modality, Attentional bias, Bias, Looming, Functional Laterality, Signal Detection (Psychological), Analysis of Variance, Transcranial Magnetic Stimulation, Visual Cortex, Visual Perception, Auditory Perception, Acoustic Stimulation, Photic Stimulation, Cognitive Neuroscience, Linguistics and Language, Neuroscience, Cognitive psychology
- Abstract
Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799–1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception postsound as a function of individual auditory versus visual preference/dominance using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to different L-sound profiles (velocities). In addition, we found that this depended on an individual's preferred sensory modality (auditory vs. visual) as determined through a divided attention task (attentional preference), but not on their simple threshold detection level per sensory modality. Whereas individuals with “visual preference” showed enhanced phosphene perception irrespective of L-sound velocity, those with “auditory preference” showed differential peaks in phosphene perception whose delays after sound-offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration possibly to support prompt identification and reaction to potentially dangerous approaching objects. The observed interindividual differences favor the idea that unlike early effects this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.
- Published
- 2013
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library