27 results for "Petro LS"
Search Results
2. Cellular psychology: relating cognition to context-sensitive pyramidal cells.
- Author
- Phillips WA, Bachmann T, Spratling MW, Muckli L, Petro LS, and Zolnik T
- Subjects
- Humans, Animals, Dendrites physiology, Pyramidal Cells physiology, Cognition physiology
- Abstract
'Cellular psychology' is a new field of inquiry that studies dendritic mechanisms for adapting mental events to the current context, thus increasing their coherence, flexibility, effectiveness, and comprehensibility. Apical dendrites of neocortical pyramidal cells have a crucial role in cognition - those dendrites receive input from diverse sources, including feedback, and can amplify the cell's feedforward transmission if relevant in that context. Specialized subsets of inhibitory interneurons regulate this cooperative context-sensitive processing by increasing or decreasing amplification. Apical input has different effects on cellular output depending on whether we are awake, deeply asleep, or dreaming. Furthermore, wakeful thought and imagery may depend on apical input. High-resolution neuroimaging in humans supports and complements evidence on these cellular mechanisms from other mammals., Competing Interests: Declaration of interests No interests are declared., (Crown Copyright © 2024. Published by Elsevier Ltd. All rights reserved.)
- Published
- 2025
- Full Text
- View/download PDF
3. Retinotopic biases in contextual feedback signals to V1 for object and scene processing.
- Author
- Bennett MA, Petro LS, Abbatecola C, and Muckli LF
- Abstract
Identifying the objects embedded in natural scenes relies on recurrent processing between lower and higher visual areas. How is cortical feedback information related to objects and scenes organised in lower visual areas? The spatial organisation of cortical feedback converging in early visual cortex during object and scene processing could be retinotopically specific as it is coded in V1, or object centred as coded in higher areas, or both. Here, we characterise object and scene-related feedback information to V1. Participants identified foreground objects or background scenes in images with occluded central and peripheral subsections, allowing us to isolate feedback activity to foveal and peripheral regions of V1. Using fMRI and multivoxel pattern classification, we found that background scene information is projected to both foveal and peripheral V1 but can be disrupted in the fovea by a sufficiently demanding object discrimination task, during which we found evidence of foveal object decoding when using naturalistic stimuli. We suggest that the feedback connections during scene perception project back to earlier visual areas an automatic sketch of occluded information to the predicted retinotopic location. In the case of a cognitive task however, feedback pathways project content to foveal retinotopic space, potentially for introspection, functioning as a cognitive active blackboard and not necessarily predicting the object's location. This feedback architecture could reflect the internal mapping in V1 of the brain's endogenous models of the visual environment that are used to predict perceptual inputs., Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Crown Copyright © 2024 Published by Elsevier B.V.)
- Published
- 2024
- Full Text
- View/download PDF
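
The entry above decodes object- and scene-related feedback information from occluded (non-stimulated) foveal and peripheral regions of V1 using fMRI and multivoxel pattern classification. The sketch below is a minimal, illustrative version of that kind of ROI-based decoding, built on synthetic data with scikit-learn; the trial counts, voxel counts, and ROI names are assumptions, not values from the paper.

```python
# Hedged sketch: ROI-based MVPA decoding of scene category from V1 voxel patterns.
# Synthetic data stand in for single-trial response estimates; shapes are assumptions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 96, 300                    # assumed trial/voxel counts per ROI
labels = np.repeat([0, 1], n_trials // 2)       # 0 = scene A, 1 = scene B

def simulate_roi(signal=0.4):
    """Fake voxel patterns: noise plus a weak category-specific component."""
    base = rng.normal(size=(n_trials, n_voxels))
    base[labels == 1, : n_voxels // 10] += signal
    return base

rois = {"foveal_V1": simulate_roi(0.5), "peripheral_V1": simulate_roi(0.3)}

clf = make_pipeline(StandardScaler(), LinearSVC())
cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)

for name, patterns in rois.items():
    acc = cross_val_score(clf, patterns, labels, cv=cv).mean()
    print(f"{name}: decoding accuracy = {acc:.2f} (chance = 0.50)")
```

In a real analysis the pattern matrices would come from beta estimates restricted to retinotopically defined occluded subregions; the cross-validated classifier is the only part this sketch shares with the study.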
4. Decoding sound content in the early visual cortex of aphantasic participants.
- Author
- Montabes de la Cruz BM, Abbatecola C, Luciani RS, Paton AT, Bergmann J, Vetter P, Petro LS, and Muckli LF
- Subjects
- Humans, Male, Female, Middle Aged, Adult, Aged, Aphasia physiopathology, Blindness physiopathology, Acoustic Stimulation, Visual Cortex physiology, Visual Cortex physiopathology, Auditory Perception physiology
- Abstract
Listening to natural auditory scenes leads to distinct neuronal activity patterns in the early visual cortex (EVC) of blindfolded sighted and congenitally blind participants [1, 2]. This pattern of sound decoding is organized by eccentricity, with the accuracy of auditory information increasing from foveal to far peripheral retinotopic regions in the EVC (V1, V2, and V3). This functional organization by eccentricity is predicted by primate anatomical connectivity [3, 4], where cortical feedback projections from auditory and other non-visual areas preferentially target the periphery of early visual areas. In congenitally blind participants, top-down feedback projections to the visual cortex proliferate [5], which might account for even higher sound-decoding accuracy in the EVC compared with blindfolded sighted participants [2]. In contrast, studies in participants with aphantasia suggest an impairment of feedback projections to early visual areas, leading to a loss of visual imagery experience [6, 7]. This raises the question of whether impaired visual feedback pathways in aphantasia also reduce the transmission of auditory information to early visual areas. We presented auditory scenes to 23 blindfolded aphantasic participants. We found overall decreased sound decoding in early visual areas compared to blindfolded sighted ("control") and blind participants. We further explored this difference by modeling eccentricity effects across the blindfolded control, blind, and aphantasia datasets, and with a whole-brain searchlight analysis. Our findings suggest that the feedback of auditory content to the EVC is reduced in aphantasic participants. Reduced top-down projections might lead to both less sound decoding and reduced subjective experience of visual imagery., Competing Interests: Declaration of interests The authors declare no competing interests., (Copyright © 2024 The Author(s). Published by Elsevier Inc. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
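
The entry above reports that auditory scene identity can be decoded from early visual cortex, that decoding accuracy rises from foveal to peripheral eccentricities, and that decoding is weaker in aphantasic participants. Below is a toy sketch of eccentricity-binned decoding compared across groups; the bin labels, group names, and injected effect sizes are illustrative assumptions only.

```python
# Hedged sketch: decode auditory scene identity within eccentricity bins of EVC,
# then compare groups. All data are simulated; shapes and labels are assumptions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
bins = ["foveal", "middle", "far_peripheral"]
sounds = np.repeat(np.arange(3), 20)            # 3 auditory scenes x 20 trials

def decode(signal):
    """Cross-validated decoding accuracy for one eccentricity bin."""
    X = rng.normal(size=(len(sounds), 200))
    for s in range(3):                          # inject a sound-specific pattern
        X[sounds == s, s * 20:(s + 1) * 20] += signal
    return cross_val_score(LinearSVC(), X, sounds, cv=5).mean()

groups = {"sighted_blindfolded": [0.2, 0.4, 0.6],   # assumed effect sizes per bin
          "aphantasia":          [0.1, 0.2, 0.3]}

for group, signals in groups.items():
    accs = [decode(s) for s in signals]
    print(group, {b: round(a, 2) for b, a in zip(bins, accs)})
```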
5. Cortical depth profiles in primary visual cortex for illusory and imaginary experiences.
- Author
- Bergmann J, Petro LS, Abbatecola C, Li MS, Morgan AT, and Muckli L
- Subjects
- Humans, Primary Visual Cortex, Photic Stimulation methods, Feedback, Magnetic Resonance Imaging, Brain Mapping, Illusions physiology, Visual Cortex physiology
- Abstract
Visual illusions and mental imagery are non-physical sensory experiences that involve cortical feedback processing in the primary visual cortex. Using laminar functional magnetic resonance imaging (fMRI) in two studies, we investigate if information about these internal experiences is visible in the activation patterns of different layers of primary visual cortex (V1). We find that imagery content is decodable mainly from deep layers of V1, whereas seemingly 'real' illusory content is decodable mainly from superficial layers. Furthermore, illusory content shares information with perceptual content, whilst imagery content does not generalise to illusory or perceptual information. Together, our results suggest that illusions and imagery, which differ immensely in their subjective experiences, also involve partially distinct early visual microcircuits. However, overlapping microcircuit recruitment might emerge based on the nuanced nature of subjective conscious experience., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
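
The entry above decodes illusory and imagined content from different cortical depths of V1 and tests whether classifiers generalise between conditions. The following sketch shows one way depth-resolved decoding with cross-condition generalisation could be set up on simulated data; the depth labels, effect sizes, and data shapes are assumptions.

```python
# Hedged sketch: within-condition decoding per cortical depth, plus a
# cross-condition generalisation test. Simulated voxel patterns throughout.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
depths = ["deep", "middle", "superficial"]
y = np.repeat([0, 1], 40)                       # two stimulus contents

def patterns(effect):
    """Noise plus a content-specific pattern in a fixed voxel subset."""
    X = rng.normal(size=(80, 150))
    X[y == 1, :15] += effect
    return X

for depth, illusion_eff, imagery_eff in zip(depths, [0.1, 0.2, 0.6], [0.6, 0.2, 0.1]):
    illusion, imagery = patterns(illusion_eff), patterns(imagery_eff)
    within_illusion = cross_val_score(LinearSVC(), illusion, y, cv=5).mean()
    within_imagery = cross_val_score(LinearSVC(), imagery, y, cv=5).mean()
    # cross-condition generalisation: fit on illusion patterns, test on imagery
    cross = LinearSVC().fit(illusion, y).score(imagery, y)
    print(f"{depth:12s} illusion={within_illusion:.2f} "
          f"imagery={within_imagery:.2f} cross={cross:.2f}")
```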
6. The representation of occluded image regions in area V1 of monkeys and humans.
- Author
- Papale P, Wang F, Morgan AT, Chen X, Gilhuis A, Petro LS, Muckli L, Roelfsema PR, and Self MW
- Subjects
- Animals, Humans, Haplorhini, Primary Visual Cortex, Visual Fields, Photic Stimulation methods, Visual Perception physiology, Visual Cortex physiology
- Abstract
Neuronal activity in the primary visual cortex (V1) is driven by feedforward input from within the neurons' receptive fields (RFs) and modulated by contextual information in regions surrounding the RF. The effect of contextual information on spiking activity occurs rapidly and is therefore challenging to dissociate from feedforward input. To address this challenge, we recorded the spiking activity of V1 neurons in monkeys viewing either natural scenes or scenes where the information in the RF was occluded, effectively removing the feedforward input. We found that V1 neurons responded rapidly and selectively to occluded scenes. V1 responses elicited by occluded stimuli could be used to decode individual scenes and could be predicted from those elicited by non-occluded images, indicating that there is an overlap between visually driven and contextual responses. We used representational similarity analysis to show that the structure of V1 representations of occluded scenes measured with electrophysiology in monkeys correlates strongly with the representations of the same scenes in humans measured with functional magnetic resonance imaging (fMRI). Our results reveal that contextual influences rapidly alter V1 spiking activity in monkeys over distances of several degrees in the visual field, carry information about individual scenes, and resemble those in human V1. VIDEO ABSTRACT., Competing Interests: Declaration of interests P.R.R. is founder and shareholder of Phosphoenix, a company that aims to develop a visual brain prosthesis for blind people., (Copyright © 2023 Elsevier Inc. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
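
The entry above uses representational similarity analysis (RSA) to compare V1 responses to occluded scenes between monkey electrophysiology and human fMRI. The sketch below shows the core RSA step on simulated data: build a scene-by-scene representational dissimilarity matrix (RDM) per modality and correlate their upper triangles; matrix sizes and noise levels are assumptions.

```python
# Hedged sketch of RSA: one RDM per modality (scenes x scenes), then a
# Spearman correlation of their off-diagonal entries. Simulated responses.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_scenes = 24
latent = rng.normal(size=(n_scenes, 10))        # shared "scene structure"

# Each modality measures the same structure through its own noisy channels.
ephys = latent @ rng.normal(size=(10, 120)) + 0.5 * rng.normal(size=(n_scenes, 120))
fmri = latent @ rng.normal(size=(10, 400)) + 2.0 * rng.normal(size=(n_scenes, 400))

rdm_ephys = squareform(pdist(ephys, metric="correlation"))
rdm_fmri = squareform(pdist(fmri, metric="correlation"))

iu = np.triu_indices(n_scenes, k=1)             # compare off-diagonal entries only
rho, p = spearmanr(rdm_ephys[iu], rdm_fmri[iu])
print(f"RDM correlation (Spearman rho) = {rho:.2f}, p = {p:.3g}")
```

Comparing only the upper triangles is the usual RSA convention, since RDM diagonals are zero by construction and the matrices are symmetric.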
7. The Spatial Precision of Contextual Feedback Signals in Human V1.
- Author
- Petro LS, Smith FW, Abbatecola C, and Muckli L
- Abstract
Neurons in the primary visual cortex (V1) receive sensory inputs that describe small, local regions of the visual scene and cortical feedback inputs from higher visual areas processing the global scene context. Investigating the spatial precision of this visual contextual modulation will contribute to our understanding of the functional role of cortical feedback inputs in perceptual computations. We used human functional magnetic resonance imaging (fMRI) to test the spatial precision of contextual feedback inputs to V1 during natural scene processing. We measured brain activity patterns in the stimulated regions of V1 and in regions that we blocked from direct feedforward input, receiving information only from non-feedforward (i.e., feedback and lateral) inputs. We measured the spatial precision of contextual feedback signals by generalising brain activity patterns across parametrically spatially displaced versions of identical images using an MVPA cross-classification approach. We found that fMRI activity patterns in cortical feedback signals predicted our scene-specific features in V1 with a precision of approximately 4 degrees. The stimulated regions of V1 carried more precise scene information than non-stimulated regions; however, these regions also contained information patterns that generalised up to 4 degrees. This result shows that contextual signals relating to the global scene are similarly fed back to V1 when feedforward inputs are either present or absent. Our results are in line with contextual feedback signals from extrastriate areas to V1, describing global scene information and contributing to perceptual computations such as the hierarchical representation of feature boundaries within natural scenes.
- Published
- 2023
- Full Text
- View/download PDF
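
The entry above estimates the spatial precision of contextual feedback by training a classifier on patterns evoked at one image position and testing on spatially displaced versions (an MVPA cross-classification approach). The sketch below reproduces that train-then-shift logic on synthetic patterns; the degree-to-voxel mapping and the placement of the signal are illustrative assumptions.

```python
# Hedged sketch: train at displacement 0, test at increasing displacements;
# generalisation accuracy should fall once the patterns no longer overlap.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_voxels = 60, 200
y = np.repeat([0, 1], n_trials // 2)            # two scenes

def displaced_patterns(shift_voxels):
    """Scene-specific pattern whose location slides with displacement."""
    X = rng.normal(size=(n_trials, n_voxels))
    start = 20 + shift_voxels
    X[y == 1, start:start + 20] += 0.8
    return X

clf = LinearSVC().fit(displaced_patterns(0), y)
for deg, shift in zip([0, 2, 4, 6, 8], [0, 5, 10, 20, 40]):  # assumed deg-to-voxel map
    acc = clf.score(displaced_patterns(shift), y)
    print(f"displacement {deg} deg: generalisation accuracy = {acc:.2f}")
```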
8. Numerosity Perception in Peripheral Vision.
- Author
- Li MS, Abbatecola C, Petro LS, and Muckli L
- Abstract
Peripheral vision has different functional priorities for mammals than foveal vision. One of its roles is to monitor the environment while central vision is focused on the current task. Becoming distracted too easily would be counterproductive in this perspective, so the brain should react to behaviourally relevant changes. Gist processing is good for this purpose, and it is therefore not surprising that evidence from both functional brain imaging and behavioural research suggests a tendency to generalize and blend information in the periphery. This may be caused by the balance of perceptual influence in the periphery between bottom-up (i.e., sensory information) and top-down (i.e., prior or contextual information) processing channels. Here, we investigated this interaction behaviourally using a peripheral numerosity discrimination task with top-down and bottom-up manipulations. Participants compared numerosity between the left and right peripheries of a screen. Each periphery was divided into a centre and a surrounding area, only one of which was a task relevant target region. Our top-down task modulation was the instruction of which area to attend - centre or surround. We varied the signal strength by altering the stimulus durations, i.e., the amount of information presented/processed (as a combined bottom-up and recurrent top-down feedback factor). We found that numerosity perceived in target regions was affected by contextual information in neighbouring (but irrelevant) areas. This effect appeared as soon as stimulus duration allowed the task to be reliably performed and persisted even at the longest duration (1 s). We compared the pattern of results with an ideal-observer model and found a qualitative difference in the way centre and surround areas interacted perceptually in the periphery. When participants reported on the central area, the irrelevant surround would affect the response as a weighted combination - consistent with the idea of a receptive field focused in the target area to which irrelevant surround stimulation leaks in. When participants reported on the surround, we can best describe the response with a model in which occasionally the attention switches from task relevant surround to task irrelevant centre - consistent with a selection model of two competing streams of information. Overall, our results show that the influence of spatial context in the periphery is mandatory but task dependent., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2021 Li, Abbatecola, Petro and Muckli.)
- Published
- 2021
- Full Text
- View/download PDF
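
The entry above compares an ideal-observer-style weighted-combination account with an occasional attention-switching account of how irrelevant context enters peripheral numerosity reports. The toy simulation below contrasts the two response models; the weight, switch probability, and noise level are assumed values for illustration.

```python
# Hedged sketch: two candidate models of how irrelevant context enters the report.
# Model A: report = weighted average of target and irrelevant numerosity.
# Model B: report = target numerosity, except on a fraction of trials where
#          attention switches and the irrelevant numerosity is reported instead.
import numpy as np

rng = np.random.default_rng(5)
n_trials = 10_000
target = rng.integers(10, 30, n_trials).astype(float)
irrelevant = rng.integers(10, 30, n_trials).astype(float)
noise = rng.normal(0, 1.5, n_trials)

w = 0.8                                   # assumed weight on the target region
report_weighted = w * target + (1 - w) * irrelevant + noise

p_switch = 0.15                           # assumed probability of an attention switch
switched = rng.random(n_trials) < p_switch
report_switching = np.where(switched, irrelevant, target) + noise

for name, rep in [("weighted combination", report_weighted),
                  ("attention switching", report_switching)]:
    bias = np.corrcoef(rep - target, irrelevant - target)[0, 1]
    print(f"{name}: correlation of report error with context difference = {bias:.2f}")
```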
9. A self-supervised deep neural network for image completion resembles early visual cortex fMRI activity patterns for occluded scenes.
- Author
- Svanera M, Morgan AT, Petro LS, and Muckli L
- Subjects
- Artificial Intelligence, Humans, Neural Networks, Computer, Visual Perception, Magnetic Resonance Imaging, Visual Cortex diagnostic imaging
- Abstract
The promise of artificial intelligence in understanding biological vision relies on the comparison of computational models with brain data with the goal of capturing functional principles of visual information processing. Convolutional neural networks (CNN) have successfully matched the transformations in hierarchical processing occurring along the brain's feedforward visual pathway, extending into ventral temporal cortex. However, we are still to learn if CNNs can successfully describe feedback processes in early visual cortex. Here, we investigated similarities between human early visual cortex and a CNN with encoder/decoder architecture, trained with self-supervised learning to fill occlusions and reconstruct an unseen image. Using representational similarity analysis (RSA), we compared 3T functional magnetic resonance imaging (fMRI) data from a nonstimulated patch of early visual cortex in human participants viewing partially occluded images, with the different CNN layer activations from the same images. Results show that our self-supervised image-completion network outperforms a classical object-recognition supervised network (VGG16) in terms of similarity to fMRI data. This work provides additional evidence that optimal models of the visual system might come from less feedforward architectures trained with less supervision. We also find that CNN decoder pathway activations are more similar to brain processing compared to encoder activations, suggesting an integration of mid- and low/middle-level features in early visual cortex. Challenging an artificial intelligence model to learn natural image representations via self-supervised learning and comparing them with brain data can help us to constrain our understanding of information processing, such as neuronal predictive coding.
- Published
- 2021
- Full Text
- View/download PDF
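
The entry above trains a self-supervised encoder/decoder CNN to fill occluded image regions and then compares its layer activations with early visual cortex fMRI via RSA. The PyTorch sketch below covers only the self-supervised inpainting objective on random stand-in images; the tiny architecture, mask placement, and training length are assumptions, and the RSA comparison step is omitted.

```python
# Hedged sketch: tiny encoder/decoder trained to reconstruct a masked image quadrant.
import torch
import torch.nn as nn

class TinyInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyInpainter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(8, 3, 64, 64)                 # stand-in for natural scenes
mask = torch.ones_like(images)
mask[:, :, 32:, 32:] = 0                          # occlude the lower-right quadrant

for step in range(5):                             # a few illustrative training steps
    occluded = images * mask
    recon = model(occluded)
    # self-supervised loss: reconstruct only the occluded region
    loss = ((recon - images) ** 2 * (1 - mask)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: masked-region MSE = {loss.item():.4f}")
```

In the study's setting, decoder-layer activations for occluded scenes would then be turned into RDMs and compared with the fMRI RDMs, as in the RSA sketch given earlier in this list.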
10. Neuronal codes for predictive processing in cortical layers.
- Author
- Petro LS and Muckli L
- Subjects
- Cognition, Neurons, Neocortex
- Abstract
Predictive processing as a computational motif of the neocortex needs to be elaborated into theories of higher cognitive functions that include simulating future behavioural outcomes. We contribute to the neuroscientific perspective of predictive processing as a foundation for the proposed representational architectures of the mind.
- Published
- 2020
- Full Text
- View/download PDF
11. Multivoxel Pattern of Blood Oxygen Level Dependent Activity can be sensitive to stimulus specific fine scale responses.
- Author
- Vizioli L, De Martino F, Petro LS, Kersten D, Ugurbil K, Yacoub E, and Muckli L
- Subjects
- Algorithms, Humans, Image Processing, Computer-Assisted methods, Magnetic Resonance Imaging methods, Support Vector Machine, Brain physiology, Brain Mapping methods, Models, Theoretical, Oxygen metabolism
- Abstract
At ultra-high field, fMRI voxels can span the sub-millimeter range, allowing the recording of blood oxygenation level dependent (BOLD) responses at the level of fundamental units of neural computation, such as cortical columns and layers. This sub-millimeter resolution, however, is only nominal in nature as a number of factors limit the spatial acuity of functional voxels. Multivoxel Pattern Analysis (MVPA) may provide a means to detect information at finer spatial scales that may otherwise not be visible at the single voxel level due to limitations in sensitivity and specificity. Here, we evaluate the spatial scale of stimulus-specific BOLD responses in multivoxel patterns exploited by linear Support Vector Machine, Linear Discriminant Analysis and Naïve Bayesian classifiers across cortical depths in V1. To this end, we artificially misaligned the testing relative to the training portion of the data in increasing spatial steps, then investigated the breakdown of the classifiers' performances. A one-voxel shift led to a significant decrease in decoding accuracy (p < 0.05) across all cortical depths, indicating that stimulus-specific responses in a multivoxel pattern of BOLD activity exploited by multivariate decoders can be as precise as the nominal resolution of single voxels (here 0.8 mm isotropic). Our results further indicate that large draining vessels, prominently residing in proximity of the pial surface, do not, in this case, hinder the ability of MVPA to exploit fine scale patterns of BOLD signals. We argue that tailored analytical approaches can help overcome limitations in high-resolution fMRI and permit studying the mesoscale organization of the human brain with higher sensitivities.
- Published
- 2020
- Full Text
- View/download PDF
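
The entry above tests how fine-grained multivoxel information is by artificially misaligning the test data relative to the training data in voxel-sized steps and tracking the drop in decoding accuracy. The sketch below mimics that shift-and-test logic on synthetic patterns; the voxel grid, the use of np.roll for misalignment, and the classifier choice are assumptions.

```python
# Hedged sketch: decoding accuracy as a function of artificial test-set misalignment.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)
n_trials, n_voxels = 80, 256
y = np.repeat([0, 1], n_trials // 2)

# Fine-scale, voxel-specific stimulus pattern plus noise.
pattern = np.zeros(n_voxels)
pattern[rng.choice(n_voxels, 40, replace=False)] = 1.0
X_train = rng.normal(size=(n_trials, n_voxels)) + np.outer(y, pattern)
X_test = rng.normal(size=(n_trials, n_voxels)) + np.outer(y, pattern)

clf = LinearSVC().fit(X_train, y)
for shift in range(4):                       # misalign the test patterns voxel by voxel
    acc = clf.score(np.roll(X_test, shift, axis=1), y)
    print(f"shift = {shift} voxel(s): accuracy = {acc:.2f}")
```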
12. Scene Representations Conveyed by Cortical Feedback to Early Visual Cortex Can Be Described by Line Drawings.
- Author
- Morgan AT, Petro LS, and Muckli L
- Subjects
- Adult, Female, Humans, Magnetic Resonance Imaging methods, Male, Young Adult, Pattern Recognition, Visual physiology, Photic Stimulation methods, Visual Cortex diagnostic imaging, Visual Cortex physiology, Visual Perception physiology
- Abstract
Human behavior is dependent on the ability of neuronal circuits to predict the outside world. Neuronal circuits in early visual areas make these predictions based on internal models that are delivered via non-feedforward connections. Despite our extensive knowledge of the feedforward sensory features that drive cortical neurons, we have a limited grasp on the structure of the brain's internal models. Progress in neuroscience therefore depends on our ability to replicate the models that the brain creates internally. Here we record human fMRI data while presenting partially occluded visual scenes. Visual occlusion allows us to experimentally control sensory input to subregions of visual cortex while internal models continue to influence activity in these regions. Because the observed activity is dependent on internal models, but not on sensory input, we have the opportunity to map visual features conveyed by the brain's internal models. Our results show that activity related to internal models in early visual cortex is more related to scene-specific features than to categorical or depth features. We further demonstrate that behavioral line drawings provide a good description of internal model structure representing scene-specific features. These findings extend our understanding of internal models, showing that line drawings provide a window into our brains' internal models of vision. SIGNIFICANCE STATEMENT We find that fMRI activity patterns corresponding to occluded visual information in early visual cortex fill in scene-specific features. Line drawings of the missing scene information correlate with our recorded activity patterns, and thus with internal models. Despite our extensive knowledge of the sensory features that drive cortical neurons, we have a limited grasp on the structure of our brains' internal models. These results therefore constitute an advance to the field of neuroscience by extending our knowledge about the models that our brains construct to efficiently represent and predict the world. Moreover, they link a behavioral measure to these internal models, which play an active role in many components of human behavior, including visual predictions, action planning, and decision making., (Copyright © 2019 Morgan et al.)
- Published
- 2019
- Full Text
- View/download PDF
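
The entry above relates fMRI activity patterns in occluded regions of early visual cortex to behavioural line drawings of the missing scene content. The sketch below illustrates one simple way such a comparison could be framed, correlating a drawing-derived feature vector with the measured voxel pattern per scene; the feature representation and dimensions are assumptions rather than the paper's pipeline.

```python
# Hedged sketch: correlate line-drawing-derived features with occluded-region
# voxel patterns, scene by scene. All arrays are simulated stand-ins.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(10)
n_scenes, n_voxels = 20, 120

# Assume each scene's line drawing has been projected into voxel space somehow
# (e.g. via a receptive-field model); here we simply simulate a shared component.
shared = rng.normal(size=(n_scenes, n_voxels))
drawing_features = shared + 0.5 * rng.normal(size=(n_scenes, n_voxels))
voxel_patterns = shared + 1.5 * rng.normal(size=(n_scenes, n_voxels))

correlations = [pearsonr(drawing_features[i], voxel_patterns[i])[0]
                for i in range(n_scenes)]
print(f"mean scene-wise correlation = {np.mean(correlations):.2f}")
```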
13. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.
- Author
- Revina Y, Petro LS, and Muckli L
- Subjects
- Adolescent, Adult, Female, Humans, Image Processing, Computer-Assisted methods, Magnetic Resonance Imaging methods, Male, Photic Stimulation, Visual Pathways physiology, Young Adult, Brain Mapping methods, Feedback, Visual Cortex physiology, Visual Perception physiology
- Abstract
Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs., (Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
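
The entry above asks whether feedback to V1 generalises across spatial frequency by cross-classifying between low- and high-spatial-frequency surround conditions. The sketch below shows that cross-frequency train/test scheme on simulated data; the shared scene component and signal scales are assumptions.

```python
# Hedged sketch: cross-classification between LSF and HSF conditions.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
n_trials, n_voxels = 60, 180
y = np.tile([0, 1, 2], n_trials // 3)            # three scenes

scene_patterns = rng.normal(size=(3, n_voxels))  # scene identity shared across bands

def condition_data(scale):
    """Patterns carrying the same scene information at a different signal scale."""
    return scale * scene_patterns[y] + rng.normal(size=(n_trials, n_voxels))

lsf_train, lsf_test = condition_data(0.6), condition_data(0.6)
hsf_train, hsf_test = condition_data(0.4), condition_data(0.4)

pairs = [("LSF", lsf_train, "LSF", lsf_test), ("LSF", lsf_train, "HSF", hsf_test),
         ("HSF", hsf_train, "LSF", lsf_test), ("HSF", hsf_train, "HSF", hsf_test)]
for a, Xtr, b, Xte in pairs:
    acc = LinearSVC().fit(Xtr, y).score(Xte, y)
    print(f"train {a} -> test {b}: accuracy = {acc:.2f}")
```

Above-chance accuracy in the off-diagonal (train LSF, test HSF and vice versa) is the signature of spatial-frequency-general feedback information in this kind of design.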
14. A Perspective on Cortical Layering and Layer-Spanning Neuronal Elements.
- Author
- Larkum ME, Petro LS, Sachdev RNS, and Muckli L
- Abstract
This review article addresses the function of the layers of the cerebral cortex. We develop the perspective that cortical layering needs to be understood in terms of its functional anatomy, i.e., the terminations of synaptic inputs on distinct cellular compartments and their effect on cortical activity. The cortex is a hierarchical structure in which feed forward and feedback pathways have a layer-specific termination pattern. We take the view that the influence of synaptic inputs arriving at different cortical layers can only be understood in terms of their complex interaction with cellular biophysics and the subsequent computation that occurs at the cellular level. We use high-resolution fMRI, which can resolve activity across layers, as a case study for implementing this approach by describing how cognitive events arising from the laminar distribution of inputs can be interpreted by taking into account the properties of neurons that span different layers. This perspective is based on recent advances in measuring subcellular activity in distinct feed-forward and feedback axons and in dendrites as they span across layers.
- Published
- 2018
- Full Text
- View/download PDF
15. Forecasting Faces in the Cortex: Comment on 'High-Level Prediction Signals in a Low-Level Area of the Macaque Face-Processing Hierarchy', by Schwiedrzik and Freiwald, Neuron (2017).
- Author
- Petro LS and Muckli L
- Subjects
- Animals, Brain, Brain Mapping, Neurons, Macaca, Pattern Recognition, Visual
- Abstract
Although theories of predictive coding in the brain abound, we lack key pieces of neuronal data to support these theories. Recently, Schwiedrzik and Freiwald found neurophysiological evidence for predictive codes throughout the face-processing hierarchy in macaque cortex. We highlight how these data enhance our knowledge of cortical information processing, and the impact of this more broadly., (Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
16. Predictive feedback to V1 dynamically updates with sensory input.
- Author
- Edwards G, Vetter P, McGruer F, Petro LS, and Muckli L
- Subjects
- Adult, Brain Mapping methods, Female, Healthy Volunteers, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Motion, Motion Perception, Saccades, Young Adult, Neurofeedback, Photic Stimulation, Visual Cortex physiology
- Abstract
Predictive coding theories propose that the brain creates internal models of the environment to predict upcoming sensory input. Hierarchical predictive coding models of vision postulate that higher visual areas generate predictions of sensory inputs and feed them back to early visual cortex. In V1, sensory inputs that do not match the predictions lead to amplified brain activation, but does this amplification process dynamically update to new retinotopic locations with eye-movements? We investigated the effect of eye-movements in predictive feedback using functional brain imaging and eye-tracking whilst presenting an apparent motion illusion. Apparent motion induces an internal model of motion, during which sensory predictions of the illusory motion feed back to V1. We observed attenuated BOLD responses to predicted stimuli at the new post-saccadic location in V1. Therefore, pre-saccadic predictions update their retinotopic location in time for post-saccadic input, validating dynamic predictive coding theories in V1.
- Published
- 2017
- Full Text
- View/download PDF
17. The Significance of Memory in Sensory Cortex.
- Author
- Muckli L and Petro LS
- Subjects
- Humans, Attention physiology, Memory physiology, Visual Cortex physiology
- Abstract
Early sensory cortex is typically investigated in response to sensory stimulation, masking the contribution of internal signals. Recently, van Kerkoerle and colleagues reported that attention and memory signals segregate from sensory signals within specific layers of primary visual cortex, providing insight into the role of internal signals in sensory processing., (Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.)
- Published
- 2017
- Full Text
- View/download PDF
18. The laminar integration of sensory inputs with feedback signals in human cortex.
- Author
- Petro LS and Muckli L
- Subjects
- Animals, Brain Mapping, Humans, Rodentia, Cerebral Cortex physiology, Cognition physiology, Feedback, Physiological physiology, Sensory Receptor Cells physiology
- Abstract
The cortex constitutes the largest area of the human brain. Yet we have only a basic understanding of how the cortex performs one vital function: the integration of sensory signals (carried by feedforward pathways) with internal representations (carried by feedback pathways). A multi-scale, multi-species approach is essential for understanding the site of integration, computational mechanism and functional role of this processing. To improve our knowledge we must rely on brain imaging with improved spatial and temporal resolution and paradigms which can measure internal processes in the human brain, and on the bridging of disciplines in order to characterize this processing at cellular and circuit levels. We highlight apical amplification as one potential mechanism for integrating feedforward and feedback inputs within pyramidal neurons in the rodent brain. We reflect on the challenges and progress in applying this model neuronal process to the study of human cognition. We conclude that cortical-layer specific measures in humans will be an essential contribution for better understanding the landscape of information in cortical feedback, helping to bridge the explanatory gap., (Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2017
- Full Text
- View/download PDF
19. Contextual modulation of primary visual cortex by auditory signals.
- Author
- Petro LS, Paton AT, and Muckli L
- Subjects
- Acoustic Stimulation, Animals, Humans, Photic Stimulation, Auditory Perception, Neural Pathways, Visual Cortex physiology, Visual Perception
- Abstract
Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'., (© 2017 The Authors.)
- Published
- 2017
- Full Text
- View/download PDF
20. The brain's predictive prowess revealed in primary visual cortex.
- Author
- Petro LS and Muckli L
- Subjects
- Humans, Visual Cortex physiology
- Published
- 2016
- Full Text
- View/download PDF
21. Contextual Feedback to Superficial Layers of V1.
- Author
- Muckli L, De Martino F, Vizioli L, Petro LS, Smith FW, Ugurbil K, Goebel R, and Yacoub E
- Subjects
- Humans, Magnetic Resonance Imaging, Photic Stimulation, Feedback, Physiological, Visual Cortex physiology, Visual Pathways
- Abstract
Neuronal cortical circuitry comprises feedforward, lateral, and feedback projections, each of which terminates in distinct cortical layers [1-3]. In sensory systems, feedforward processing transmits signals from the external world into the cortex, whereas feedback pathways signal the brain's inference of the world [4-11]. However, the integration of feedforward, lateral, and feedback inputs within each cortical area impedes the investigation of feedback, and to date, no technique has isolated the feedback of visual scene information in distinct layers of healthy human cortex. We masked feedforward input to a region of V1 cortex and studied the remaining internal processing. Using high-resolution functional brain imaging (0.8 mm(3)) and multivoxel pattern information techniques, we demonstrate that during normal visual stimulation scene information peaks in mid-layers. Conversely, we found that contextual feedback information peaks in outer, superficial layers. Further, we found that shifting the position of the visual scene surrounding the mask parametrically modulates feedback in superficial layers of V1. Our results reveal the layered cortical organization of external versus internal visual processing streams during perception in healthy human subjects. We provide empirical support for theoretical feedback models such as predictive coding [10, 12] and coherent infomax [13] and reveal the potential of high-resolution fMRI to access internal processing in sub-millimeter human cortex., (Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
22. Contributions of cortical feedback to sensory processing in primary visual cortex.
- Author
- Petro LS, Vizioli L, and Muckli L
- Abstract
Closing the structure-function divide is more challenging in the brain than in any other organ (Lichtman and Denk, 2011). For example, in early visual cortex, feedback projections to V1 can be quantified (e.g., Budd, 1998) but the understanding of feedback function is comparatively rudimentary (Muckli and Petro, 2013). Focusing on the function of feedback, we discuss how textbook descriptions mask the complexity of V1 responses, and how feedback and local activity reflects not only sensory processing but internal brain states.
- Published
- 2014
- Full Text
- View/download PDF
23. Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future.
- Author
- Muckli L, Petro LS, and Smith FW
- Subjects
- Humans, Attention physiology, Brain physiology, Cognition physiology, Cognitive Science trends, Perception physiology
- Abstract
Clark offers a powerful description of the brain as a prediction machine, which offers progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
- Published
- 2013
- Full Text
- View/download PDF
24. Decoding face categories in diagnostic subregions of primary visual cortex.
- Author
- Petro LS, Smith FW, Schyns PG, and Muckli L
- Subjects
- Adult, Brain Mapping, Eye, Female, Humans, Magnetic Resonance Imaging, Male, Models, Neurological, Mouth, Facial Expression, Form Perception, Visual Cortex physiology
- Abstract
Higher visual areas in the occipitotemporal cortex contain discrete regions for face processing, but it remains unclear if V1 is modulated by top-down influences during face discrimination, and if this is widespread throughout V1 or localized to retinotopic regions processing task-relevant facial features. Employing functional magnetic resonance imaging (fMRI), we mapped the cortical representation of two feature locations that modulate higher visual areas during categorical judgements - the eyes and mouth. Subjects were presented with happy and fearful faces, and we measured the fMRI signal of V1 regions processing the eyes and mouth whilst subjects engaged in gender and expression categorization tasks. In a univariate analysis, we used a region-of-interest-based general linear model approach to reveal changes in activation within these regions as a function of task. We then trained a linear pattern classifier to classify facial expression or gender on the basis of V1 data from 'eye' and 'mouth' regions, and from the remaining non-diagnostic V1 region. Using multivariate techniques, we show that V1 activity discriminates face categories both in local 'diagnostic' and widespread 'non-diagnostic' cortical subregions. This indicates that V1 might receive the processed outcome of complex facial feature analysis from other cortical (i.e. fusiform face area, occipital face area) or subcortical areas (amygdala)., (© 2013 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.)
- Published
- 2013
- Full Text
- View/download PDF
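
The entry above compares decoding of expression and gender from V1 subregions retinotopically mapped to the eyes and mouth against the remaining non-diagnostic V1 region. The sketch below sets up that ROI-by-task comparison on simulated patterns; ROI sizes and effect sizes are illustrative assumptions.

```python
# Hedged sketch: compare category decoding across 'diagnostic' and 'non-diagnostic'
# V1 subregions for two tasks (expression, gender). Simulated voxel patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_trials = 80
expression = np.repeat([0, 1], n_trials // 2)    # happy vs fearful
gender = np.tile([0, 1], n_trials // 2)          # counterbalanced across trials

def roi(n_voxels, expr_eff, gend_eff):
    """Noise plus separate expression- and gender-related components."""
    X = rng.normal(size=(n_trials, n_voxels))
    X[expression == 1, :10] += expr_eff
    X[gender == 1, 10:20] += gend_eff
    return X

rois = {"eye_region": roi(120, 0.6, 0.3),
        "mouth_region": roi(120, 0.5, 0.3),
        "non_diagnostic_V1": roi(400, 0.2, 0.1)}

for name, X in rois.items():
    for label, task in [("expression", expression), ("gender", gender)]:
        acc = cross_val_score(LinearSVC(), X, task, cv=5).mean()
        print(f"{name:18s} {label:10s} accuracy = {acc:.2f}")
```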
25. Network interactions: non-geniculate input to V1.
- Author
- Muckli L and Petro LS
- Subjects
- Cognition physiology, Feedback, Sensory, Humans, Memory physiology, Photic Stimulation, Visual Perception, Nerve Net physiology, Visual Cortex cytology, Visual Cortex physiology, Visual Pathways physiology
- Abstract
The strongest connections to V1 are fed back from neighbouring area V2 and from a network of higher cortical areas (e.g. V3, V5, LOC, IPS and A1), transmitting the results of cognitive operations such as prediction, attention and imagination. V1 is therefore at the receiving end of a complex cortical processing cascade and not only at the entrance stage of cortical processing of retinal input. One elegant strategy to investigate this information-rich feedback to V1 is to eliminate feedforward input, that is, exploit V1's retinotopic organisation to isolate subregions receiving no direct bottom-up stimulation. We highlight the diverse mechanisms of cortical feedback, ranging from gain control to predictive coding, and conclude that V1 is involved in rich internal communication processes., (Copyright © 2013 Elsevier Ltd. All rights reserved.)
- Published
- 2013
- Full Text
- View/download PDF
26. Transmission of facial expressions of emotion co-evolved with their efficient decoding in the brain: behavioral and brain evidence.
- Author
- Schyns PG, Petro LS, and Smith ML
- Subjects
- Electroencephalography, Female, Humans, Male, Meta-Analysis as Topic, Spatial Behavior, Time Factors, Visual Pathways physiology, Behavior, Biological Evolution, Brain physiology, Emotions, Facial Expression
- Abstract
Competent social organisms will read the social signals of their peers. In primates, the face has evolved to transmit the organism's internal emotional state. Adaptive action suggests that the brain of the receiver has co-evolved to efficiently decode expression signals. Here, we review and integrate the evidence for this hypothesis. With a computational approach, we co-examined facial expressions as signals for data transmission and the brain as receiver and decoder of these signals. First, we show in a model observer that facial expressions form a lowly correlated signal set. Second, using time-resolved EEG data, we show how the brain uses spatial frequency information impinging on the retina to decorrelate expression categories. Between 140 and 200 ms following stimulus onset, independently in the left and right hemispheres, an information processing mechanism starts locally with encoding the eye, irrespective of expression, followed by a zooming out to process the entire face, followed by a zooming back in to diagnostic features (e.g. the opened eyes in "fear", the mouth in "happy"). A model categorizer demonstrates that at 200 ms, the left and right brain have represented enough information to predict behavioral categorization performance.
- Published
- 2009
- Full Text
- View/download PDF
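
The entry above uses time-resolved EEG to track when expression-diagnostic information is encoded around the N170. The sketch below is a generic time-resolved decoding scaffold (a classifier trained independently at each time point), not the paper's spatial-frequency or information-theoretic analysis; the epoch dimensions, channel subset, and the injected 140-200 ms effect window are assumptions.

```python
# Hedged sketch: decode expression from simulated EEG epochs at each time point.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_trials, n_channels, n_times = 120, 32, 150     # ~ -100 to 500 ms epochs
times_ms = np.linspace(-100, 500, n_times)
y = np.repeat([0, 1], n_trials // 2)             # two expressions

X = rng.normal(size=(n_trials, n_channels, n_times))
effect = (times_ms > 140) & (times_ms < 200)     # assumed diagnostic time window
# Inject an expression-specific signal into 8 channels within that window.
X[np.ix_(np.where(y == 1)[0], np.arange(8), np.where(effect)[0])] += 0.5

accuracy = []
for t in range(n_times):                         # one classifier per time point
    acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    accuracy.append(acc)

peak = times_ms[int(np.argmax(accuracy))]
print(f"peak decoding at ~{peak:.0f} ms")
```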
27. Dynamics of visual information integration in the brain for categorizing facial expressions.
- Author
- Schyns PG, Petro LS, and Smith ML
- Subjects
- Electroencephalography, Evoked Potentials, Female, Humans, Male, Photic Stimulation, Brain physiology, Facial Expression, Visual Perception physiology
- Abstract
A key to understanding visual cognition is to determine when, how, and with what information the human brain distinguishes between visual categories. So far, the dynamics of information processing for categorization of visual stimuli has not been elucidated. By using an ecologically important categorization task (seven expressions of emotion), we demonstrate, in three human observers, that an early brain event (the N170 Event Related Potential, occurring 170 ms after stimulus onset) integrates visual information specific to each expression, according to a pattern. Specifically, starting 50 ms prior to the ERP peak, facial information tends to be integrated from the eyes downward in the face. This integration stops, and the ERP peaks, when the information diagnostic for judging a particular expression has been integrated (e.g., the eyes in fear, the corners of the nose in disgust, or the mouth in happiness). Consequently, the duration of information integration from the eyes down determines the latency of the N170 for each expression (e.g., with "fear" being faster than "disgust," itself faster than "happy"). For the first time in visual categorization, we relate the dynamics of an important brain event to the dynamics of a precise information-processing function.
- Published
- 2007
- Full Text
- View/download PDF