103 results for "Andrew E. Welchman"
Search Results
2. Adaptation to Binocular Anticorrelation Results in Increased Neural Excitability.
- Author
-
Reuben Rideaux, Elizabeth Michael, and Andrew E. Welchman
- Published
- 2020
- Full Text
- View/download PDF
3. Look but don't touch: Visual cues to surface structure drive somatosensory cortex.
- Author
-
Hua-Chun Sun, Andrew E. Welchman, Dorita H. F. Chang, and Massimiliano Di Luca
- Published
- 2016
- Full Text
- View/download PDF
4. Ultra-high field neuroimaging reveals fine-scale processing for 3D perception
- Author
-
Elisa Zamboni, Andrew E. Welchman, Valentin G. Kemper, Ke Jia, Nuno Reis Goncalves, Adrian K. T. Ng, Rainer Goebel, and Zoe Kourtzi
- Subjects
7 Tesla, ultra-high-field fMRI, GE BOLD, stereoscopy, neuroimaging, perception, depth perception, binocular disparity, random dot stereogram, functional connectivity, cortical depth, visual cortex, magnetic resonance imaging, binocular vision
- Abstract
Binocular disparity provides critical information about three-dimensional (3D) structures to support perception and action. In the past decade significant progress has been made in uncovering human brain areas engaged in the processing of binocular disparity signals. Yet, the fine-scale brain processing underlying 3D perception remains unknown. Here, we use ultra-high-field (7T) functional imaging at submillimeter resolution to examine fine-scale BOLD fMRI signals involved in 3D perception. In particular, we sought to interrogate the local circuitry involved in disparity processing by sampling fMRI responses at different positions relative to the cortical surface (i.e., across cortical depths corresponding to layers). We tested for representations related to 3D perception by presenting participants (male and female, N = 8) with stimuli that enable stable stereoscopic perception [i.e., correlated random dot stereograms (RDS)] versus those that do not (i.e., anticorrelated RDS). Using multivoxel pattern analysis (MVPA), we demonstrate cortical depth-specific representations in areas V3A and V7 as indicated by stronger pattern responses for correlated than for anticorrelated stimuli in upper rather than deeper layers. Examining informational connectivity, we find higher feedforward layer-to-layer connectivity for correlated than anticorrelated stimuli between V3A and V7. Further, we observe disparity-specific feedback from V3A to V1 and from V7 to V3A. Our findings provide evidence for the role of V3A as a key nexus for disparity processing, which is implicated in feedforward and feedback signals related to the perceptual estimation of 3D structures. SIGNIFICANCE STATEMENT Binocular vision plays a significant role in supporting our interactions with the surrounding environment. The fine-scale neural mechanisms that underlie the brain's skill in extracting 3D structures from binocular signals are poorly understood. 
Here, we capitalize on recent advances in ultra-high-field functional imaging to interrogate human brain circuits involved in 3D perception at submillimeter resolution. We provide evidence for the role of area V3A as a key nexus for disparity processing, which is implicated in feedforward and feedback signals related to the perceptual estimation of 3D structures from binocular signals. These fine-scale measurements help bridge the gap between animal neurophysiology and human fMRI studies investigating cross-scale circuits, from micro circuits to global brain networks for 3D perception.
- Published
- 2021
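The decoding logic described in the abstract above (MVPA distinguishing correlated from anticorrelated stimuli from voxel activity patterns) can be sketched with a toy example. This is not the authors' analysis pipeline: the data are synthetic, and a simple cross-validated nearest-centroid classifier stands in for the linear classifiers typically used in MVPA.

```python
# Toy MVPA sketch: decode stimulus class (0 = correlated RDS,
# 1 = anticorrelated RDS) from simulated voxel patterns using
# k-fold cross-validated nearest-centroid classification.
import numpy as np

def decode_accuracy(X, y, n_folds=5):
    """Cross-validated nearest-centroid decoding accuracy."""
    idx = np.arange(len(y))
    correct = 0
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        c0 = X[train][y[train] == 0].mean(axis=0)  # class centroids
        c1 = X[train][y[train] == 1].mean(axis=0)  # from training folds only
        for i in test:
            pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
            correct += pred == y[i]
    return correct / len(y)

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50
y = np.tile([0, 1], n_trials // 2)           # alternating condition labels
pattern = rng.normal(0, 1, n_voxels)         # a weak class-specific pattern
X = rng.normal(0, 1, (n_trials, n_voxels)) + np.outer(y, 0.5 * pattern)
print(decode_accuracy(X, y))                 # well above the 0.5 chance level
```

Because the class-specific signal is spread across many voxels, the multivariate classifier recovers it even though each individual voxel is noisy, which is the rationale for MVPA over univariate contrasts.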
5. Perceptual Integration for Qualitatively Different 3-D Cues in the Human Brain.
- Author
-
Dicle Dövencioglu, Hiroshi Ban, Andrew J. Schofield, and Andrew E. Welchman
- Published
- 2013
- Full Text
- View/download PDF
6. But Still It Moves: Static Image Statistics Underlie How We See Motion
- Author
-
Reuben Rideaux and Andrew E. Welchman
- Subjects
visual perception, motion perception, illusion, neural network, Bayes theorem, natural images, autocorrelation, speed, direction, visual cortex
- Abstract
Seeing movement promotes survival. It results from an uncertain interplay between evolution and experience, making it hard to isolate the drivers of computational architectures found in brains. Here we seek insight into motion perception using a neural network (MotionNet) trained on moving images to classify velocity. The network recapitulates key properties of motion direction and speed processing in biological brains, and we use it to derive, and test, understanding of motion (mis)perception at the computational, neural, and perceptual levels. We show that diverse motion characteristics are largely explained by the statistical structure of natural images, rather than motion per se. First, we show how neural and perceptual biases for particular motion directions can result from the orientation structure of natural images. Second, we demonstrate an interrelation between speed and direction preferences in (macaque) MT neurons that can be explained by image autocorrelation. Third, we show that natural image statistics mean that speed and image contrast are related quantities. Finally, using behavioral tests (humans, both sexes), we show that it is knowledge of the speed-contrast association that accounts for motion illusions, rather than the distribution of movements in the environment (the “slow world” prior) as premised by Bayesian accounts. Together, this provides an exposition of motion speed and direction estimation, and produces concrete predictions for future neurophysiological experiments. More broadly, we demonstrate the conceptual value of marrying artificial systems with biological characterization, moving beyond “black box” reproduction of an architecture to advance understanding of complex systems, such as the brain. SIGNIFICANCE STATEMENT Using an artificial systems approach, we show that physiological properties of motion can result from natural image structure. 
In particular, we show that the anisotropic distribution of orientations in natural statistics is sufficient to explain the cardinal bias for motion direction. We show that inherent autocorrelation in natural images means that speed and direction are related quantities, which could shape the relationship between speed and direction tuning of MT neurons. Finally, we show that movement speed and image contrast are related in moving natural images, and that motion misperception can be explained by this speed-contrast association, not a “slow world” prior.
- Published
- 2020
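The core computation behind estimating velocity from image sequences, as studied in the abstract above, can be illustrated with a toy sketch. This is not MotionNet itself (no network is trained here); it simply shows the cross-correlation principle that spatiotemporal motion detectors approximate, applied to a synthetic 1-D pattern.

```python
# Illustrative sketch (not the trained network): estimate the direction
# and size of a 1-D pattern's displacement between two frames by finding
# the shift that maximises their cross-correlation, i.e. the computation
# that banks of spatiotemporal motion filters approximate.
import numpy as np

def estimate_shift(frame0, frame1, max_shift=5):
    """Return the circular shift of frame0 that best matches frame1."""
    shifts = list(range(-max_shift, max_shift + 1))
    scores = [float(np.dot(np.roll(frame0, s), frame1)) for s in shifts]
    return shifts[int(np.argmax(scores))]

rng = np.random.default_rng(1)
pattern = rng.normal(0, 1, 64)   # a random 1-D "image"
moved = np.roll(pattern, 3)      # the pattern drifts 3 pixels rightward
print(estimate_shift(pattern, moved))  # → 3
```

Biases like those discussed in the abstract arise when the input statistics (orientation anisotropies, autocorrelation, contrast) make some shifts systematically easier to detect than others.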
7. Adaptation to Binocular Anticorrelation Results in Increased Neural Excitability
- Author
-
Reuben Rideaux, Elizabeth Michael, and Andrew E. Welchman
- Subjects
vision disparity, depth perception, stereopsis, adaptation, electroencephalography, visual evoked potentials, binocular neurons, sensory system, neurophysiology, eye-tracking technology
- Abstract
Throughout the brain, information from individual sources converges onto higher order neurons. For example, information from the two eyes first converges in binocular neurons in area V1. Some neurons appear tuned to similarities between sources of information, which makes intuitive sense in a system striving to match multiple sensory signals to a single external cause, i.e., establish causal inference. However, there are also neurons that are tuned to dissimilar information. In particular, some binocular neurons respond maximally to a dark feature in one eye and a light feature in the other. Despite compelling neurophysiological and behavioural evidence supporting the existence of these neurons (Cumming & Parker, 1997; Janssen, Vogels, Liu, & Orban, 2003; Katyal, Vergeer, He, He, & Engel, 2018; Kingdom, Jennings, & Georgeson, 2018; Tsao, Conway, & Livingstone, 2003), their function has remained opaque. To determine how neural mechanisms tuned to dissimilarities support perception, here we use electroencephalography to measure human observers’ steady-state visually evoked potentials (SSVEPs) in response to change in depth after prolonged viewing of anticorrelated and correlated random-dot stereograms (RDS). We find that adaptation to anticorrelated RDS results in larger SSVEPs, while adaptation to correlated RDS has no effect. These results are consistent with recent theoretical work suggesting ‘what not’ neurons play a suppressive role in supporting stereopsis (Goncalves & Welchman, 2017); that is, selective adaptation of neurons tuned to binocular mismatches reduces suppression, resulting in increased neural excitability. This work was supported by the Leverhulme Trust (ECF-2017-573 to R. R.), the Isaac Newton Trust (17.08(o) to R. R.), and the Wellcome Trust (095183/Z/10/Z to A. E. W. and 206495/Z/17/Z to E. M.).
- Published
- 2020
8. Empowering 8 Billion Minds: Enabling Better Mental Health for All via the Ethical Adoption of Technologies
- Author
-
Husseini K. Manji, Vanessa Candeias, Nitish V. Thakor, Andrew E. Welchman, Simon Tottman, I-han Chou, Helen Herrman, Sir Philip Campbell, Shekhar Saxena, Kim Hei-Man Chow, Barbara Harvey, P. Murali Doraiswamy, Bjarte Reve, Caroline Montojo, Tan Le, Peter Varnum, Karen S. Rommelfanger, Mohammad Abdul Aziz Sultan Al Olama, Alvaro Fernández Ibáñez, Sung-Jin Jeong, Charlotte Stix, and Elisha London
- Subjects
mental health, coverage and access, public relations, psychology
- Published
- 2021
9. How multisensory neurons solve causal inference
- Author
-
Katherine R. Storrs, Reuben Rideaux, Guido Maiello, and Andrew E. Welchman
- Subjects
vestibular system, artificial neural network, multisensory integration, medial superior temporal area, motion estimation, causal inference, neurophysiology, neuroscience
- Abstract
Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction ("congruent" neurons), while others prefer opposing directions ("opposite" neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
- Published
- 2021
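The causal-inference computation that the network in the abstract above is trained to solve can be caricatured in a few lines. The rule and all values here are illustrative assumptions, not the paper's model: fuse the visual and vestibular estimates when their conflict is small enough to suggest a single common cause, otherwise discount vision.

```python
# Toy causal-inference rule (illustrative assumption, not the paper's
# network): fuse visual and vestibular self-motion estimates when the
# cue conflict is small enough to suggest a single common cause;
# otherwise attribute the visual motion to an external cause (e.g. the
# neighbouring train) and rely on the vestibular cue alone.
def self_motion_estimate(visual, vestibular, sigma=1.0, threshold=3.0):
    if abs(visual - vestibular) > threshold * sigma:
        return vestibular                  # separate causes: discount vision
    return 0.5 * (visual + vestibular)     # common cause: integrate the cues

print(self_motion_estimate(0.2, 0.0))  # small conflict, fuse  → 0.1
print(self_motion_estimate(5.0, 0.0))  # large conflict, vestibular only → 0.0
```

In the trained network described above, no explicit threshold exists; the balance of activity between congruent and opposite units plays the role this hard cutoff plays in the sketch.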
10. Learning predictive structure without a teacher: decision strategies and brain routes
- Author
-
Andrew E. Welchman and Zoe Kourtzi
- Subjects
structure learning, decision making, reward, generative structure, adaptive behaviour, brain, neuroscience
- Abstract
Extracting the structure of complex environments is at the core of our ability to interpret the present and predict the future. This skill is important for a range of behaviours, from navigating a new city to learning music and language. Classical approaches that investigate our ability to extract the principles of organisation that govern complex environments focus on reward-based learning. Yet, the human brain has been shown to be expert at learning generative structure based on mere exposure and without explicit reward. Individuals have been shown to adapt, unbeknownst to them, to changes in the environment's temporal statistics and to predict future events. Further, we present evidence for a common brain architecture for unsupervised structure learning and reward-based learning, suggesting that the brain is built on the premise that 'learning is its own reward' to support adaptive behaviour.
- Published
- 2019
11. Perceptual memory drives learning of retinotopic biases for bistable stimuli.
- Author
-
Aidan Peter Murphy, David A Leopold, and Andrew E Welchman
- Subjects
associative learning, bistable perception, ambiguous figures, perceptual stabilization, cue recruitment, psychology
- Abstract
The visual system exploits past experience at multiple timescales to resolve perceptual ambiguity in the retinal image. For example, perception of a bistable stimulus can be biased towards one interpretation over another when preceded by a brief presentation of a disambiguated version of the stimulus (positive priming) or through intermittent presentations of the ambiguous stimulus (stabilization). Similarly, prior presentations of unambiguous stimuli can be used to explicitly train a long-lasting association between a percept and a retinal location (perceptual association). These phenomena have typically been regarded as independent processes, with short-term biases attributed to perceptual memory and longer-term biases described as associative learning. Here we tested for interactions between these two forms of experience-dependent perceptual bias and demonstrate that short-term processes strongly influence long-term outcomes. We first demonstrate that the establishment of long-term perceptual contingencies does not require explicit training by unambiguous stimuli, but can arise spontaneously during the periodic presentation of brief, ambiguous stimuli. Using rotating Necker cube stimuli, we observed enduring, retinotopically specific perceptual biases that were expressed from the outset and remained stable for up to forty minutes, consistent with the known phenomenon of perceptual stabilization. Further, bias was undiminished after a break period of five minutes, but was readily reset by interposed periods of continuous, as opposed to periodic, ambiguous presentation. Taken together, the results demonstrate that perceptual biases can arise naturally and may principally reflect the brain’s tendency to favor recent perceptual interpretation at a given retinal location. Further, they suggest that an association between retinal location and perceptual state, rather than a physical stimulus, is sufficient to generate long-term biases in perceptual organization.
- Published
- 2014
- Full Text
- View/download PDF
12. Testing for functional organization of three-dimensional surface tilt encoding within visual cortex
- Author
-
Reuben Rideaux and Andrew E. Welchman
- Subjects
visual cortex, visual perception, tilt, orientation, object recognition, binocular disparity, intraparietal sulcus, human brain, neuroscience
- Abstract
Visual perception of three-dimensional (3D) structure is important for object recognition, grasping, and manipulation. The 3D structure of a surface can be defined in terms of its slant and tilt. Previous work has shown that slant and tilt are represented in the posterior and ventral intraparietal sulcus of the human brain; however, it is unclear whether the representation of these features is functionally organized within this region. Here we use phase-encoded presentation of 3D planar surfaces with linear gradients defined by horizontal binocular disparity while measuring fMRI activity to test whether the representation of 3D surface tilt is functionally organized within visual cortex. We find functionally defined structures within V3A and V7. Most notably, in one participant we find that the tilt preference is unilaterally organized in a pinwheel-like structure, similar to those observed for orientation preference in V1, which encompasses most of area V3A. These findings indicate that 3D orientation is functionally organized within the human visual cortex, and the evidence suggesting the presence of a large pinwheel-like structure indicates that this type of organization may be applied canonically within the brain at multiple scales.
- Published
- 2020
13. International brain initiative: an innovative framework for coordinated global brain research efforts
- Author
-
Keiji Tanaka, Sung Jin Jeong, Amy Bernard, Stephanie D. Albin, Gang Pei, Melina E. Hale, Pedro A. Valdes-Sosa, Katrin Amunts, Yves De Koninck, Linda Lanyon, Alexandre Pouget, Jialin Zheng, Edda Thiels, Toshihisa Ohtsuka, Gary G. Wilson, Kimberly N. Scobie, James O. Deshler, Pann-Ghill Suh, Amy Adams, Tasia Asakawa, Hideyuki Okano, Khaled Chakli, Caroline Montojo, Jan G. Bjaalie, Christoph J. Ebell, Samantha L. White, Michael Häusser, Pierre J. Magistretti, Gary F. Egan, Jason Reindorp, Rafael Yuste, Shigeo Okabe, Xu Zhang, Andrew E. Welchman, Judy Illes, Linda J. Richards, Paul Sajda, Karen S. Rommelfanger, Yan Li, Pingping Li, and Agnes McMahon
- Subjects
biomedical research, internationality, brain research, neurosciences, neurotechnology, intersectoral collaboration, data sharing, neuroethics, government
- Abstract
The International Brain Initiative (IBI) has been established to coordinate efforts across existing and emerging national and regional brain initiatives. This NeuroView describes how to be involved and the new opportunities for global collaboration that are emerging between scientists, scientific societies, funders, industry, government, and society.
- Published
- 2020
14. Multimodal imaging of brain connectivity reveals predictors of individual decision strategy in statistical learning
- Author
-
Peter Tino, Rui Wang, Joseph Giorgio, Vasilis M. Karlaftis, Petra E. Vértes, Yuan Shen, Zoe Kourtzi, and Andrew E. Welchman
- Subjects
statistical learning, decision strategy, neuroplasticity, working memory, prefrontal cortex, functional and structural connectivity, graph metrics, cognitive psychology
- Abstract
Successful human behavior depends on the brain's ability to extract meaningful structure from information streams and make predictions about future events. Individuals can differ markedly in the decision strategies they use to learn the environment's statistics, yet we have little idea why. Here, we investigate whether the brain networks involved in learning temporal sequences without explicit reward differ depending on the decision strategy that individuals adopt. We demonstrate that individuals alter their decision strategy in response to changes in temporal statistics and engage dissociable circuits: extracting the exact sequence statistics relates to plasticity in motor cortico-striatal circuits, while selecting the most probable outcomes relates to plasticity in visual, motivational and executive cortico-striatal circuits. Combining graph metrics of functional and structural connectivity, we provide evidence that learning-dependent changes in these circuits predict individual decision strategy. Our findings propose brain plasticity mechanisms that mediate individual ability for interpreting the structure of variable environments.
- Published
- 2019
15. Areal differences in depth cue integration between monkey and human
- Author
-
Marcelo Armendariz, Hiroshi Ban, Andrew E. Welchman, and Wim Vanduffel
- Subjects
visual perception, depth cue integration, binocular disparity, macaque, visual cortex, eye movements, magnetic resonance imaging, electrophysiology
- Abstract
Electrophysiological evidence suggested primarily the involvement of the middle temporal (MT) area in depth cue integration in macaques, as opposed to human imaging data pinpointing area V3B/kinetic occipital area (V3B/KO). To clarify this conundrum, we decoded monkey functional MRI (fMRI) responses evoked by stimuli signaling near or far depths defined by binocular disparity, relative motion, and their combination, and we compared results with those from an identical experiment previously performed in humans. Responses in macaque area MT are more discriminable when two cues concurrently signal depth, and information provided by one cue is diagnostic of depth indicated by the other. This suggests that monkey area MT computes fusion of disparity and motion depth signals, exactly as shown for human area V3B/KO. Hence, these data reconcile previously reported discrepancies between depth processing in human and monkey by showing the involvement of the dorsal stream in depth cue integration using the same technique, despite the engagement of different regions. Published in PLOS Biology, vol. 17, issue 3.
- Published
- 2019
16. The mixed-polarity benefit of stereopsis arises in early visual cortex
- Author
-
Andrew E. Welchman and Lukas F. Schaeffner
- Subjects
vision disparity, stereopsis, mixed-polarity benefit, stereo correspondence, binocular energy model, binocular vision, depth perception, transcranial magnetic stimulation, visual cortex
- Abstract
Depth perception is better when observers view stimuli containing a mixture of bright and dark visual features. It is currently unclear where in the visual system sensory processing benefits from the availability of different contrast polarity. To address this question, we applied transcranial magnetic stimulation to the visual cortex to modulate normal neural activity during processing of single- or mixed-polarity random-dot stereograms. In line with previous work, participants gave significantly better depth judgments for mixed-polarity stimuli. Stimulation of early visual cortex (V1/V2) significantly increased this benefit for mixed-polarity stimuli, and it did not affect performance for single-polarity stimuli. Stimulation of disparity responsive areas V3a and LO had no effect on perception. Our findings show that disparity processing in early visual cortex gives rise to the mixed-polarity benefit. This is consistent with computational models of stereopsis at the level of V1 that produce a mixed polarity benefit.
- Published
- 2019
17. Shape Perception: Boundary Conditions on a Grey Area
- Author
-
Andrew E. Welchman
- Subjects
depth perception, shape perception, light reflection, color, cues, boundary conditions
- Abstract
Modulations in light intensity across a visual image could be caused by a flat object with varying pigmentation, such as wallpaper, or differential light reflection from a three-dimensional shape made of uniform material, such as curtains. A new study identifies key image cues that help the brain work out which interpretation to select.
- Published
- 2019
18. Contextual effects on binocular matching are evident in primary visual cortex
- Author
-
Reuben Rideaux and Andrew E. Welchman
- Subjects
binocular matching, contextual effects, primary visual cortex, stereopsis, stereoscopy, depth perception, functional imaging, binocular vision
- Abstract
Global context can dramatically influence local visual perception. This phenomenon is well documented for monocular features, e.g., the Kanizsa triangle. It has also been demonstrated for binocular matching: the Wallpaper Illusion (Brewster, 1844) is disambiguated by the luminance of the background (Anderson & Nakayama, 1994). For monocular features, there is evidence that global context can influence neuronal responses as early as V1 (Muckli et al., 2015). However, for binocular matching, the activity in this area of the visual cortex is thought to represent local processing, suggesting that the influence of global context may occur at later stages of cortical processing. Here we sought to test whether binocular matching is influenced by contextual effects in V1, using fMRI to measure brain activity while participants viewed perceptually ambiguous “wallpaper” stereograms whose depth was disambiguated by the luminance of the surrounding region. We localized voxels in V1 corresponding to the ambiguous region of the pattern, i.e., where the signal received from the eyes was not predictive of depth. Despite the ambiguity of the input signal, using multi-voxel pattern analysis we were able to reliably decode perceived (near/far) depth from the activity of these voxels. These findings indicate that stereoscopy-related neural activity is influenced by global context as early as V1.
- Published
- 2019
19. Exploring and explaining properties of motion processing in biological brains using a neural network
- Author
-
Andrew E. Welchman and Reuben Rideaux
- Subjects
motion perception, neural network, V1 and MT, reverse-phi, speed and direction, psychophysics, neurophysiology, visual cortex
- Abstract
Visual motion perception underpins behaviours ranging from navigation to depth perception and grasping. Our limited access to biological systems constrains our understanding of how motion is processed within the brain. Here we explore properties of motion perception in biological systems by training a neural network (‘MotionNetxy’) to estimate the velocity of image sequences. The network recapitulates key characteristics of motion processing in biological brains, and we use our complete access to its structure to explore and understand motion (mis)perception at the computational, neural, and perceptual levels. First, we find that the network recapitulates the biological response to reverse-phi motion in terms of direction. We further find that it overestimates the speed of slow reverse-phi motion while underestimating the speed of fast reverse-phi motion because of the correlation between reverse-phi motion and the spatiotemporal receptive fields tuned to motion in opposite directions. Second, we find that the distributions of spatiotemporal tuning properties in the V1 and MT layers of the network are similar to those observed in biological systems. We then show that, compared to MT units tuned to fast speeds, those tuned to slow speeds primarily receive input from V1 units tuned to high spatial frequency and low temporal frequency. Third, we find a positive correlation between the pattern-motion and speed selectivity of MT units. Finally, we show that the network captures human underestimation of low-coherence motion stimuli, and that this is due to pooling of noise and signal motion. These findings provide biologically plausible explanations for well-known phenomena and produce concrete predictions for future psychophysical and neurophysiological experiments.
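The reverse-phi direction effect mentioned in this abstract can be demonstrated with a minimal correlation-type motion detector; this is a toy sketch, not MotionNetxy itself, and the 1-D random-dot stimulus and detector are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

shift = 3                                  # true rightward displacement per frame
frame1 = rng.normal(0, 1, 256)
frame2_phi = np.roll(frame1, shift)        # standard (phi) motion
frame2_rev = -np.roll(frame1, shift)       # reverse-phi: contrast inverts each frame

def detector(f1, f2, lag):
    """Correlation-detector response for a candidate rightward displacement."""
    return float(np.mean(f1 * np.roll(f2, -lag)))

r_phi = detector(frame1, frame2_phi, shift)
r_rev = detector(frame1, frame2_rev, shift)

# Contrast inversion flips the sign of the correlation at the true displacement,
# so the rightward-tuned detector is suppressed and direction appears reversed.
print(r_phi > 0, r_rev < 0)  # → True True
```

The sign flip is the essence of the biological reverse-phi response the network recapitulates: spatiotemporal correlation detectors signal motion opposite to the physical displacement.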
- Published
- 2021
20. The Human Brain in Depth: How We See in 3D
- Author
-
Andrew E. Welchman
- Subjects
0301 basic medicine ,Vision Disparity ,genetic structures ,media_common.quotation_subject ,03 medical and health sciences ,Imaging, Three-Dimensional ,0302 clinical medicine ,Perception ,Cortex (anatomy) ,Image Processing, Computer-Assisted ,medicine ,Humans ,Computer vision ,Vision, Ocular ,Visual Cortex ,media_common ,Cognitive science ,Depth Perception ,Vision, Binocular ,business.industry ,Human brain ,Magnetic Resonance Imaging ,Ophthalmology ,030104 developmental biology ,Stereopsis ,Visual cortex ,medicine.anatomical_structure ,Binocular disparity ,Neurology (clinical) ,Artificial intelligence ,Cues ,Depth perception ,business ,Psychology ,Binocular vision ,030217 neurology & neurosurgery - Abstract
Human perception is remarkably flexible: We experience vivid three-dimensional (3D) structure under diverse conditions, from the seemingly random magic-eye stereograms to the aesthetically beautiful, but obviously flat, canvases of the Old Masters. How does the brain achieve this apparently effortless robustness? Using brain imaging we are beginning to discover how different parts of the visual cortex support 3D perception by tracing different computations in the dorsal and ventral pathways. This review concentrates on studies of binocular disparity and its combination with other depth cues. This work suggests that the dorsal visual cortex is strongly engaged by 3D information and is involved in integrating signals to represent the structure of viewed surfaces. The ventral cortex may store representations of object configurations and the features required for task performance. These differences can be broadly understood in terms of the different computational demands of reducing estimator variance versus increasing the separation between exemplars.
- Published
- 2016
21. White-Matter Pathways for Statistical Learning of Temporal Structures
- Author
-
Vasilis M, Karlaftis, Rui, Wang, Yuan, Shen, Peter, Tino, Guy, Williams, Andrew E, Welchman, and Zoe, Kourtzi
- Subjects
Adult ,Male ,vision ,Decision Making ,Brain ,brain imaging ,New Research ,diffusion tensor imaging ,White Matter ,Markov Chains ,Young Adult ,statistical learning ,Neural Pathways ,8.1 ,Humans ,Learning ,Sensory and Motor Systems ,Female ,brain plasticity - Abstract
Extracting the statistics of event streams in natural environments is critical for interpreting current events and predicting future ones. The brain is known to rapidly find structure and meaning in unfamiliar streams of sensory experience, often by mere exposure to the environment (i.e., without explicit feedback). Yet, we know little about the brain pathways that support this type of statistical learning. Here, we test whether changes in white-matter (WM) connectivity due to training relate to our ability to extract temporal regularities. By combining behavioral training and diffusion tensor imaging (DTI), we demonstrate that humans adapt to the environment’s statistics as they change over time from simple repetition to probabilistic combinations. In particular, we show that learning relates to the decision strategy that individuals adopt when extracting temporal statistics. We next test for learning-dependent changes in WM connectivity and ask whether they relate to individual variability in decision strategy. Our DTI results provide evidence for dissociable WM pathways that relate to individual strategy: extracting the exact sequence statistics (i.e., matching) relates to connectivity changes between caudate and hippocampus, while selecting the most probable outcomes in a given context (i.e., maximizing) relates to connectivity changes between prefrontal, cingulate and basal ganglia (caudate, putamen) regions. Thus, our findings provide evidence for distinct cortico-striatal circuits that show learning-dependent changes of WM connectivity and support individual ability to learn behaviorally-relevant statistics.
- Published
- 2017
22. 4. The Processing and Integration of 3D Depth Signals in the Human Visual Cortex Revealed by fMRI and TMS
- Author
-
Dorita H. F. Chang, Andrew E. Welchman, and Hiroshi Ban
- Subjects
Visual cortex ,medicine.anatomical_structure ,Computer science ,Media Technology ,medicine ,Electrical and Electronic Engineering ,Neuroscience ,Computer Science Applications
- Published
- 2015
23. Learning predictive statistics from temporal sequences: Dynamics and strategies
- Author
-
Rui, Wang, Yuan, Shen, Peter, Tino, Andrew E, Welchman, and Zoe, Kourtzi
- Subjects
Cerebral Cortex ,Male ,vision ,learning ,behavior ,Decision Making ,Models, Neurological ,Anticipation, Psychological ,Adaptation, Physiological ,Markov Chains ,Article ,Young Adult ,Pattern Recognition, Visual ,Humans ,Computer Simulation ,Female - Abstract
Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity, from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences, from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics—that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selection of the most probable outcome in a given context (maximizing) rather than matching of the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments.
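The maximizing-versus-matching distinction can be made concrete with a short worked example; the conditional probabilities below are illustrative, not those used in the experiments:

```python
# Conditional probabilities of the next symbol given the current context
# (illustrative values only, not those used in the experiments).
p = {"A": 0.8, "B": 0.2}

# Maximizing: always predict the single most probable symbol.
acc_maximizing = max(p.values())                  # expected accuracy = 0.8

# Probability matching: predict each symbol in proportion to its probability.
acc_matching = sum(q * q for q in p.values())     # expected accuracy ≈ 0.68

print(acc_maximizing > acc_matching)  # → True
```

For any non-uniform conditional distribution, maximizing yields higher expected prediction accuracy than matching, which is why the strategy a learner adopts is behaviourally measurable.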
- Published
- 2017
24. Proscription supports robust perceptual integration by suppression in human visual cortex
- Author
-
Reuben, Rideaux and Andrew E, Welchman
- Subjects
Adult ,Male ,Neurons ,Functional Neuroimaging ,Models, Neurological ,Electric Stimulation ,Article ,Young Adult ,Visual Perception ,Evoked Potentials, Visual ,Humans ,Female ,Cues ,Photic Stimulation ,gamma-Aminobutyric Acid ,Visual Cortex - Abstract
Perception relies on integrating information within and between the senses, but how does the brain decide which pieces of information should be integrated and which kept separate? Here we demonstrate how proscription can be used to solve this problem: certain neurons respond best to unrealistic combinations of features to provide ‘what not’ information that drives suppression of unlikely perceptual interpretations. First, we present a model that captures both improved perception when signals are consistent (and thus should be integrated) and robust estimation when signals are conflicting. Second, we test for signatures of proscription in the human brain. We show that concentrations of the inhibitory neurotransmitter GABA in a brain region intricately involved in integrating cues (V3B/KO) correlate with robust integration. Finally, we show that perturbing excitation/inhibition impairs integration. These results highlight the role of proscription in robust perception and demonstrate the functional purpose of ‘what not’ sensors in supporting sensory estimation.
Perception relies on information integration but it is unclear how the brain decides which information to integrate and which to keep separate. Here, the authors develop and test a biologically inspired model of cue-integration, implicating a key role for GABAergic proscription in robust perception.
- Published
- 2017
25. 'What Not' Detectors Help the Brain See in Depth
- Author
-
Nuno R, Goncalves, Andrew E, Welchman, Welchman, Andrew [0000-0002-7559-3299], and Apollo - University of Cambridge Repository
- Subjects
Vision, Binocular ,Vision Disparity ,Models, Neurological ,Brain ,3D vision ,convolutional neural network ,Article ,Pattern Recognition, Visual ,Visual Perception ,Humans ,binocular disparity ,wallpaper illusion ,da Vinci stereopsis ,Visual Cortex ,depth perception - Abstract
Binocular stereopsis is one of the primary cues for three-dimensional (3D) vision in species ranging from insects to primates. Understanding how the brain extracts depth from two different retinal images represents a tractable challenge in sensory neuroscience that has so far evaded full explanation. Central to current thinking is the idea that the brain needs to identify matching features in the two retinal images (i.e., solving the “stereoscopic correspondence problem”) so that the depth of objects in the world can be triangulated. Although intuitive, this approach fails to account for key physiological and perceptual observations. We show that formulating the problem to identify “correct matches” is suboptimal and propose an alternative, based on optimal information encoding, that mixes disparity detection with “proscription”: exploiting dissimilar features to provide evidence against unlikely interpretations. We demonstrate the role of these “what not” responses in a neural network optimized to extract depth in natural images. The network combines information for and against the likely depth structure of the viewed scene, naturally reproducing key characteristics of both neural responses and perceptual interpretations. We capture the encoding and readout computations of the network in simple analytical form and derive a binocular likelihood model that provides a unified account of long-standing puzzles in 3D vision at the physiological and perceptual levels.
We suggest that marrying detection with proscription provides an effective coding strategy for sensory estimation that may be useful for diverse feature domains (e.g., motion) and multisensory integration.
Highlights
• The brain uses “what not” detectors to facilitate 3D vision
• Binocular mismatches are used to drive suppression of incompatible depths
• Proscription accounts for depth perception without binocular correspondence
• A simple analytical model captures perceptual and neural responses
Goncalves and Welchman show that long-standing puzzles for the physiology and perception of 3D vision are explained by the brain’s use of “what not” detectors. These facilitate stereopsis by providing evidence against interpretations that are incompatible with the true structure of the scene.
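The role of anticorrelation as “evidence against” a depth interpretation can be sketched with a toy cross-correlation disparity detector; the 1-D random-dot images and the detector itself are assumptions for illustration, not the paper's optimized network:

```python
import numpy as np

rng = np.random.default_rng(2)

disparity = 4
left = rng.normal(0, 1, 512)
right_corr = np.roll(left, disparity)    # correlated random-dot stereogram
right_anti = -np.roll(left, disparity)   # anticorrelated: contrast inverted between eyes

def response(l_img, r_img, d):
    """Cross-correlation response of a unit tuned to disparity d (wrap-around edges)."""
    return float(np.mean(l_img * np.roll(r_img, -d)))

lags = list(range(-8, 9))
profile_corr = [response(left, right_corr, d) for d in lags]
profile_anti = [response(left, right_anti, d) for d in lags]

peak_corr = lags[int(np.argmax(profile_corr))]    # strongest "match" evidence
trough_anti = lags[int(np.argmin(profile_anti))]  # strongest "what not" evidence

# Both point at the same disparity: the anticorrelated pattern yields an
# inverted response, i.e. evidence against that depth interpretation.
print(peak_corr, trough_anti)  # → 4 4
```

The inverted (negative) response to anticorrelated input is exactly the kind of signal a proscription scheme can exploit: it marks a disparity as unlikely rather than merely failing to confirm it.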
- Published
- 2017
26. Learning Predictive Statistics: Strategies and Brain Mechanisms
- Author
-
Rui, Wang, Yuan, Shen, Peter, Tino, Andrew E, Welchman, and Zoe, Kourtzi
- Subjects
Adult ,Cerebral Cortex ,Male ,vision ,Models, Statistical ,learning ,Behavioral/Cognitive ,Decision Making ,Models, Neurological ,fMRI ,prediction ,Anticipation, Psychological ,Adaptation, Physiological ,Corpus Striatum ,Pattern Recognition, Visual ,Neural Pathways ,Humans ,Computer Simulation ,Female ,Nerve Net ,Research Articles - Abstract
When immersed in a new environment, we are challenged to decipher initially incomprehensible streams of sensory information. However, quite rapidly, the brain finds structure and meaning in these incoming signals, helping us to predict and prepare ourselves for future actions. This skill relies on extracting the statistics of event streams in the environment that contain regularities of variable complexity from simple repetitive patterns to complex probabilistic combinations. Here, we test the brain mechanisms that mediate our ability to adapt to the environment's statistics and predict upcoming events. By combining behavioral training and multisession fMRI in human participants (male and female), we track the corticostriatal mechanisms that mediate learning of temporal sequences as they change in structure complexity. We show that learning of predictive structures relates to individual decision strategy; that is, selecting the most probable outcome in a given context (maximizing) versus matching the exact sequence statistics. These strategies engage distinct human brain regions: maximizing engages dorsolateral prefrontal, cingulate, sensory–motor regions, and basal ganglia (dorsal caudate, putamen), whereas matching engages occipitotemporal regions (including the hippocampus) and basal ganglia (ventral caudate). Our findings provide evidence for distinct corticostriatal mechanisms that facilitate our ability to extract behaviorally relevant statistics to make predictions. SIGNIFICANCE STATEMENT Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. Past work has studied how humans identify repetitive patterns and associative pairings. However, the natural environment contains regularities that vary in complexity from simple repetition to complex probabilistic combinations. 
Here, we combine behavior and multisession fMRI to track the brain mechanisms that mediate our ability to adapt to changes in the environment's statistics. We provide evidence for an alternate route for learning complex temporal statistics: extracting the most probable outcome in a given context is implemented by interactions between executive and motor corticostriatal mechanisms compared with visual corticostriatal circuits (including hippocampal cortex) that support learning of the exact temporal statistics.
- Published
- 2017
27. Mapping the visual brain areas susceptible to phosphene induction through brain stimulation
- Author
-
Andrew E. Welchman, Lukas F. Schaeffner, Welchman, Andrew E [0000-0002-7559-3299], and Apollo - University of Cambridge Repository
- Subjects
Adult ,Male ,Brain stimulation marker ,genetic structures ,Neuroscience(all) ,medicine.medical_treatment ,Phosphenes ,Stimulation efficacy ,Stimulation ,behavioral disciplines and activities ,050105 experimental psychology ,Functional Laterality ,03 medical and health sciences ,Neural activity ,Young Adult ,0302 clinical medicine ,medicine ,Image Processing, Computer-Assisted ,Psychophysics ,Humans ,0501 psychology and cognitive sciences ,Visual Pathways ,Neuronavigation ,Visual Cortex ,Brain Mapping ,medicine.diagnostic_test ,General Neuroscience ,05 social sciences ,Cortical excitability ,Magnetic Resonance Imaging ,Transcranial Magnetic Stimulation ,Transcranial magnetic stimulation ,Oxygen ,Phosphene ,Visual cortex ,medicine.anatomical_structure ,Logistic Models ,Brain stimulation ,Female ,Percept ,Psychology ,Functional magnetic resonance imaging ,Neuroscience ,030217 neurology & neurosurgery ,psychological phenomena and processes ,Photic Stimulation ,Research Article - Abstract
Transcranial magnetic stimulation (TMS) is a non-invasive brain stimulation technique whose effects on neural activity can be uncertain. Within the visual cortex, phosphenes are a useful marker of TMS: They indicate the induction of neural activation that propagates and creates a conscious percept. However, we currently do not know how susceptible different areas of the visual cortex are to TMS-induced phosphenes. In this study, we systematically map out locations in the visual cortex where stimulation triggered phosphenes. We relate this to the retinotopic organization and the location of object- and motion-selective areas, identified by functional magnetic resonance imaging (fMRI) measurements. Our results show that TMS can reliably induce phosphenes in early (V1, V2d, and V2v) and dorsal (V3d and V3a) visual areas close to the interhemispheric cleft. However, phosphenes are less likely in more lateral locations (hMT+/V5 and LOC). This suggests that early and dorsal visual areas are particularly amenable to TMS and that TMS can be used to probe the functional role of these areas.
- Published
- 2016
- Full Text
- View/download PDF
28. Perceptual Integration for Qualitatively Different 3-D Cues in the Human Brain
- Author
-
Andrew J. Schofield, Hiroshi Ban, Andrew E. Welchman, and Dicle N. Dövencioğlu
- Subjects
Adult ,Male ,genetic structures ,Cognitive Neuroscience ,media_common.quotation_subject ,Sensory system ,Article ,Young Adult ,Predictive Value of Tests ,Perception ,Image Processing, Computer-Assisted ,Psychophysics ,medicine ,Humans ,Association (psychology) ,Probability ,media_common ,Brain Mapping ,Depth Perception ,Communication ,business.industry ,Brain ,Pattern recognition ,Magnetic Resonance Imaging ,Oxygen ,Visual cortex ,medicine.anatomical_structure ,Binocular disparity ,Female ,Artificial intelligence ,Cues ,Kinetic depth effect ,business ,Psychology ,Depth perception ,Algorithms ,Photic Stimulation - Abstract
The visual system's flexibility in estimating depth is remarkable: We readily perceive 3-D structure under diverse conditions from the seemingly random dots of a “magic eye” stereogram to the aesthetically beautiful, but obviously flat, canvases of the Old Masters. Yet, 3-D perception is often enhanced when different cues specify the same depth. This perceptual process is understood as Bayesian inference that improves sensory estimates. Despite considerable behavioral support for this theory, insights into the cortical circuits involved are limited. Moreover, extant work tested quantitatively similar cues, reducing some of the challenges associated with integrating computationally and qualitatively different signals. Here we address this challenge by measuring fMRI responses to depth structures defined by shading, binocular disparity, and their combination. We quantified information about depth configurations (convex “bumps” vs. concave “dimples”) in different visual cortical areas using pattern classification analysis. We found that fMRI responses in dorsal visual area V3B/KO were more discriminable when disparity and shading concurrently signaled depth, in line with the predictions of cue integration. Importantly, by relating fMRI and psychophysical tests of integration, we observed a close association between depth judgments and activity in this area. Finally, using a cross-cue transfer test, we found that fMRI responses evoked by one cue afford classification of responses evoked by the other. This reveals a generalized depth representation in dorsal visual cortex that combines qualitatively different information in line with 3-D perception.
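The cross-cue transfer test has a simple logic that can be sketched in simulation; the shared “depth-configuration” pattern, the cue-specific components, and the nearest-centroid classifier below are illustrative assumptions, not the study's fMRI analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

n, v = 30, 60
depth_axis = rng.normal(0, 1, v)   # assumed shared "convex vs concave" pattern
cue_disp = rng.normal(0, 1, v)     # disparity-specific response component
cue_shade = rng.normal(0, 1, v)    # shading-specific response component

def trials(sign, cue):
    """Simulated response patterns: noise + depth signal + cue-specific signal."""
    return rng.normal(0, 1, (n, v)) + sign * 0.6 * depth_axis + 0.6 * cue

# Train centroids on disparity-defined bumps/dimples...
bump_d, dimp_d = trials(+1, cue_disp), trials(-1, cue_disp)
c_bump, c_dimp = bump_d.mean(axis=0), dimp_d.mean(axis=0)

# ...then classify shading-defined trials (cross-cue transfer).
test_X = np.vstack([trials(+1, cue_shade), trials(-1, cue_shade)])
test_y = np.array([1] * n + [0] * n)
pred = (np.linalg.norm(test_X - c_bump, axis=1)
        < np.linalg.norm(test_X - c_dimp, axis=1)).astype(int)
transfer_acc = float((pred == test_y).mean())
print(f"cross-cue transfer accuracy: {transfer_acc:.2f}")  # well above chance
```

Transfer succeeds here only because the two cue conditions share a common depth-configuration component, which is the signature of a generalized depth representation.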
- Published
- 2013
29. Mechanisms for Extracting a Signal from Noise as Revealed through the Specificity and Generality of Task Training
- Author
-
Dorita H. F. Chang, Andrew E. Welchman, and Zoe Kourtzi
- Subjects
Adult ,Male ,Signal Detection, Psychological ,Adolescent ,Process (engineering) ,Speech recognition ,Motion Perception ,Perceptual functions ,behavioral disciplines and activities ,Young Adult ,Orientation ,Humans ,Motion perception ,Depth Perception ,Communication ,Generality ,business.industry ,General Neuroscience ,Articles ,Task (computing) ,Feature (computer vision) ,Binocular disparity ,Female ,business ,Depth perception ,Psychology ,Photic Stimulation ,Psychomotor Performance - Abstract
Visual judgments critically depend on (1) the detection of meaningful items from cluttered backgrounds and (2) the discrimination of an item from highly similar alternatives. Learning and experience are known to facilitate these processes, but the specificity with which these processes operate is poorly understood. Here we use psychophysical measures of human participants to test learning in two types of commonly used tasks that target segmentation (signal-in-noise, or “coarse” tasks) versus the discrimination of highly similar items (feature difference, or “fine” tasks). First, we consider the processing of binocular disparity signals, examining performance on signal-in-noise and feature difference tasks after a period of training on one of these tasks. Second, we consider the generality of learning between different visual features, testing performance on both task types for displays defined by disparity, motion, or orientation. We show that training on a feature difference task also improves performance on signal-in-noise tasks, but only for the same visual feature. By contrast, training on a signal-in-noise task has limited benefits for fine judgments of the same feature but supports learning that generalizes to signal-in-noise tasks for other features. These findings indicate that commonly used signal-in-noise tasks require at least three distinct components: feature representations, signal-specific selection, and a generalized process that enhances segmentation. As such, there is clear potential to harness areas of commonality (both within and between cues) to improve impaired perceptual functions.
- Published
- 2013
30. Perceptual learning of second order cues for layer decomposition
- Author
-
Andrew J. Schofield, Andrew E. Welchman, Dicle N. Dövencioğlu, Welchman, Andrew [0000-0002-7559-3299], and Apollo - University of Cambridge Repository
- Subjects
Adult ,Male ,Visual perception ,Luminescence ,Layer-decomposition ,Perceptual-learning ,Luminance ,050105 experimental psychology ,Article ,Visual processing ,Discrimination Learning ,03 medical and health sciences ,0302 clinical medicine ,Optics ,Perceptual learning ,Humans ,0501 psychology and cognitive sciences ,Discrimination learning ,Second-order ,Lighting ,Analysis of Variance ,Depth Perception ,business.industry ,05 social sciences ,Pattern recognition ,Sensory Systems ,Ophthalmology ,Sensory Thresholds ,Visual Perception ,Female ,Spatial frequency ,Artificial intelligence ,Cues ,business ,Depth perception ,Psychology ,Transfer of learning ,030217 neurology & neurosurgery ,Photic Stimulation - Abstract
Highlights
► Second-order cues normally support layer decomposition at long but not short presentation times.
► Perceptual learning is used to train observers to perform the task at short presentation times.
► The transfer of learning is consistent with low-level changes in perceptual processing.
Luminance variations are ambiguous: they can signal changes in surface reflectance or changes in illumination. Layer decomposition—the process of distinguishing between reflectance and illumination changes—is supported by a range of secondary cues including colour and texture. For an illuminated corrugated, textured surface the shading pattern comprises modulations of luminance (first-order, LM) and local luminance amplitude (second-order, AM). The phase relationship between these two signals enables layer decomposition, predicts the perception of reflectance and illumination changes, and has been modelled based on early, fast, feed-forward visual processing (Schofield et al., 2010). However, while inexperienced viewers appreciate this scission at long presentation times, they cannot do so for short presentation durations (250 ms). This might suggest the action of slower, higher-level mechanisms. Here we consider how training attenuates this delay, and whether the resultant learning occurs at a perceptual level. We trained observers to discriminate the components of plaid stimuli that mixed in-phase and anti-phase LM/AM signals over a period of 5 days. After training, the strength of the AM signal needed to differentiate the plaid components fell dramatically, indicating learning. We tested for transfer of learning using stimuli with different spatial frequencies, in-plane orientations, and acutely angled plaids. We report that learning transfers only partially when the stimuli are changed, suggesting that benefits accrue from tuning specific mechanisms, rather than general interpretative processes.
We suggest that the mechanisms which support layer decomposition using second-order cues are relatively early, and not inherently slow.
- Published
- 2013
31. Contextual feedback to V1 neurons shapes binocular matching
- Author
-
Andrew E. Welchman and Reuben Rideaux
- Subjects
Ophthalmology ,Matching (statistics) ,business.industry ,Computer science ,Pattern recognition ,Artificial intelligence ,business ,Sensory Systems - Published
- 2018
32. Multisensory cues improve sensorimotor synchronisation
- Author
-
Mark T. Elliott, Andrew E. Welchman, and Alan M. Wing
- Subjects
Auditory perception ,Communication ,Visual perception ,Modalities ,business.industry ,Computer science ,General Neuroscience ,Speech recognition ,Multisensory integration ,Sensory system ,Time perception ,Asynchrony (computer programming) ,Stimulus modality ,business - Abstract
Synchronising movements with events in the surrounding environment is a ubiquitous aspect of everyday behaviour. Often, information about a stream of events is available across sensory modalities. While it is clear that we synchronise more accurately to auditory cues than to other modalities, little is known about how the brain combines multisensory signals to produce accurately timed actions. Here, we investigate multisensory integration for sensorimotor synchronisation. We extend the prevailing linear phase correction model for movement synchronisation, describing asynchrony variance in terms of sensory, motor and timekeeper components. We then assess multisensory cue integration, deriving predictions based on the optimal combination of event time defined across different sensory modalities. Participants tapped in time with metronomes presented via auditory, visual and tactile modalities, under either unimodal or bimodal presentation conditions. Temporal regularity was manipulated between modalities by applying jitter to one of the metronomes. Results matched the model predictions closely for all except the high-jitter conditions in audio–visual and audio–tactile combinations, where a bias for auditory signals was observed. We suggest that, in the production of repetitive timed actions, cues are optimally integrated in terms of both the sensory and temporal reliability of events. However, when the temporal discrepancy between cues is high they are treated independently, with movements timed to the cue with the highest sensory reliability.
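The optimal-combination prediction referred to here weights each modality's event-time estimate by its reliability (inverse variance); a minimal worked example with illustrative timing variances, not values from the study:

```python
# Maximum-likelihood combination of two timing cues.
# Variances are illustrative (ms^2), not values from the study.
var_auditory = 100.0   # more reliable cue
var_tactile = 400.0    # less reliable cue

# Reliability (inverse-variance) weight given to the auditory cue.
w_auditory = (1 / var_auditory) / (1 / var_auditory + 1 / var_tactile)

# Variance of the combined estimate: always below either unimodal variance.
var_combined = 1 / (1 / var_auditory + 1 / var_tactile)

print(round(w_auditory, 3), round(var_combined, 1))  # → 0.8 80.0
```

The combined variance falling below the best unimodal variance is the benchmark against which bimodal tapping performance is compared; the auditory bias under high jitter is a departure from this prediction.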
- Published
- 2010
33. The quick and the dead: when reaction beats intention
- Author
-
Heinrich H. Bülthoff, R. Chris Miall, Malte R. Schomers, Andrew E. Welchman, and James Stanley
- Subjects
Adult ,Male ,Competitive Behavior ,Time Factors ,BF Psychology ,Adolescent ,Property (programming) ,Movement ,Poison control ,Intention ,action observation ,050105 experimental psychology ,General Biochemistry, Genetics and Molecular Biology ,03 medical and health sciences ,Young Adult ,0302 clinical medicine ,Research articles ,Image Interpretation, Computer-Assisted ,Reaction Time ,Humans ,0501 psychology and cognitive sciences ,Motor skill ,General Environmental Science ,General Immunology and Microbiology ,Movement (music) ,05 social sciences ,Motor control ,General Medicine ,Neurophysiology ,Hand ,interpersonal competition ,Action (philosophy) ,Motor Skills ,Facilitation ,movement control ,Female ,General Agricultural and Biological Sciences ,Psychology ,Social psychology ,030217 neurology & neurosurgery ,Photic Stimulation ,Cognitive psychology - Abstract
Everyday behaviour involves a trade-off between planned actions and reaction to environmental events. Evidence from neurophysiology, neurology and functional brain imaging suggests different neural bases for the control of different movement types. Here we develop a behavioural paradigm to test movement dynamics for intentional versus reaction movements and provide evidence for a ‘reactive advantage’ in movement execution, whereby the same action is executed faster in reaction to an opponent. We placed pairs of participants in competition with each other to make a series of button presses. Within-subject analysis of movement times revealed a 10 per cent benefit for reactive actions. This was maintained when opponents performed dissimilar actions, and when participants competed against a computer, suggesting that the effect is not related to facilitation produced by action observation. Rather, faster ballistic movements may be a general property of reactive motor control, potentially providing a useful means of promoting survival.
- Published
- 2010
34. Perspective-Based Illusory Movement in a Flat Billboard—An Explanation
- Author
-
Andrew E. Welchman, Thomas V. Papathomas, and Zoe Kourtzi
- Subjects
Movement ,media_common.quotation_subject ,Illusion ,Experimental and Cognitive Psychology ,Motion (physics) ,Motion ,Illusory motion ,Advertising ,Artificial Intelligence ,Humans ,media_common ,Depth Perception ,Communication ,Optical Illusions ,Optical illusion ,business.industry ,Movement (music) ,Perspective (graphical) ,Models, Theoretical ,Sensory Systems ,Ophthalmology ,Pattern Recognition, Visual ,Cues ,Percept ,Depth perception ,business ,Psychology ,Cognitive psychology - Abstract
We describe a compelling motion illusion elicited by a huge billboard placed along a street, depicting a building that contains strong perspective cues. When observers move fast along the opposite sidewalk, they perceive the depicted building as rotating in their direction of travel. This is a special case of the ‘following’, or ‘pointing out of the picture’, illusion that elicits a strong illusory motion percept. Here we discuss the cause of the illusory motion and suggest that the brain relies on the depicted perspective cues to infer a 3-D shape and a concomitant motion that is incompatible with the physical pictorial surface.
- Published
- 2010
35. Extra-retinal signals support the estimation of 3D motion
- Author
-
Andrew E. Welchman, Eli Brenner, Julie M. Harris, Movement Behavior, and Research Institute MOVE
- Subjects
Male ,Vision Disparity ,SDG 16 - Peace ,Eye Movements ,genetic structures ,Computer science ,media_common.quotation_subject ,Motion Perception ,Extra-retinal ,Vergence ,Motion (physics) ,Motion-in-depth ,chemistry.chemical_compound ,Discrimination, Psychological ,Binocular disparity ,Perception ,Psychophysics ,Humans ,Computer vision ,media_common ,Communication ,business.industry ,SDG 16 - Peace, Justice and Strong Institutions ,Eye movement ,Retinal ,Justice and Strong Institutions ,Sensory Systems ,Ophthalmology ,chemistry ,Sensory Thresholds ,Female ,sense organs ,Artificial intelligence ,Cues ,business ,Binocular vision ,Photic Stimulation - Abstract
In natural settings, our eyes tend to track approaching objects. To estimate motion, the brain should thus take account of eye movements, perhaps using retinal cues (the retinal slip of static objects) or extra-retinal signals (motor commands). Previous work suggests that extra-retinal ocular vergence signals do not support perceptual judgments of motion-in-depth. Here, we re-evaluate this conclusion, studying motion judgments based on retinal slip and extra-retinal signals. We find that (1) each cue alone can be sufficient and (2) retinal and extra-retinal signals are combined when estimating motion-in-depth. This challenges the accepted view that observers are essentially blind to changes in eye vergence.
- Published
- 2009
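The combination of retinal and extra-retinal signals described in the abstract above is commonly modeled as reliability-weighted (inverse-variance) averaging. The sketch below illustrates that standard normative model, not the paper's own analysis; the function name and numbers are purely illustrative:

```python
def combine_cues(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted combination of two independent estimates.

    Each cue's weight is proportional to its reliability (1/variance);
    the combined variance is never larger than either input variance.
    """
    rel_a, rel_b = 1.0 / var_a, 1.0 / var_b
    combined_est = (rel_a * est_a + rel_b * est_b) / (rel_a + rel_b)
    combined_var = 1.0 / (rel_a + rel_b)
    return combined_est, combined_var

# Equally reliable cues: the combined estimate is their mean,
# with half the variance of either cue alone.
est, var = combine_cues(1.0, 0.04, 2.0, 0.04)
```

Under this scheme, a finding that both cues contribute predicts that the combined estimate is pulled toward whichever signal is more reliable on a given trial.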
36. Adaptive Estimation of Three-Dimensional Structure in the Human Brain
- Author
-
Tim J Preston, Zoe Kourtzi, and Andrew E. Welchman
- Subjects
Adult, Young Adult, Imaging, Three-Dimensional, Form Perception, Perception, Psychophysics, Humans, Brain Mapping, Communication, Monocular, General Neuroscience, Brain, fMRI adaptation, Articles, Adaptation, Physiological, Binocular disparity, Kinetic depth effect, Depth perception, Psychology, Neuroscience, Photic Stimulation - Abstract
Perceiving the three-dimensional (3D) properties of the environment relies on the brain bringing together ambiguous cues (e.g., binocular disparity, shading, texture) with information gained from short- and long-term experience. Perceptual aftereffects, in which the perception of an ambiguous 3D stimulus is biased away from the shape of a previously viewed stimulus, provide a sensitive means of probing this process, yet little is known about their neural basis. Here, we investigate 3D aftereffects using psychophysical and functional MRI (fMRI) adaptation paradigms to gain insight into the cortical circuits that mediate the perceptual interpretation of ambiguous depth signals. Using two classic bistable stimuli (Mach card, kinetic depth effect), we test aftereffects produced by 3D shapes defined by binocular (disparity) or monocular (texture, shading) depth cues. We show that the processing of ambiguous 3D stimuli in dorsal visual cortical areas (V3B/KO, V7) and posterior parietal regions is modulated by adaptation in line with perceptual aftereffects. Similar behavioral and fMRI adaptation effects for the two types of bistable stimuli suggest common neural substrates for depth aftereffects independent of the inducing depth cues (disparity, texture, shading). In line with current thinking about the role of adaptation in sensory optimization, our findings provide evidence that estimation of 3D shape in dorsal cortical areas takes account of the adaptive context to resolve depth ambiguity and interpret 3D structure.
- Published
- 2009
37. Being discrete helps keep to the beat
- Author
-
Andrew E. Welchman, Alan M. Wing, and Mark T. Elliott
- Subjects
Adult, Male, Analysis of Variance, Communication, Computer science, General Neuroscience, Beat (acoustics), Metronome, Motor Activity, Stimulus (physiology), Time Perception, Fingers, Acoustic Stimulation, Adaptive behaviour, Control theory, Humans, Female, Error detection and correction, Psychomotor Performance - Abstract
Synchronizing our actions with external events is a task we perform without apparent effort. It relies on accurate temporal control, which is widely accepted to take one of two modes of implementation: explicit timing for discrete actions and implicit timing for smooth continuous movements. Here we assess synchronization performance for different types of action and test the degree to which each action supports corrective updating following changes in the environment. Participants performed three different finger actions in time with an auditory pacing stimulus, allowing us to assess synchronization performance. Presenting a single perturbation to the otherwise regular metronome allowed us to examine corrections supported by movements varying in their mode of timing implementation. We find that discrete actions are less variable and support faster error correction. As such, discrete actions may be preferred when engaging in time-critical adaptive behaviour with people and objects in a dynamic environment.
- Published
- 2008
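The error correction examined in the abstract above is often captured by a first-order linear phase-correction model, in which a fraction alpha of each tap-to-metronome asynchrony is corrected on the next tap. A runnable sketch under that standard model; all parameter values are illustrative, not the paper's:

```python
import numpy as np

def simulate_asynchronies(n_taps=200, alpha=0.6, perturb_at=100,
                          perturb_ms=50.0, noise_sd=0.0, seed=0):
    """First-order phase correction: a fraction alpha of the previous
    asynchrony is corrected on each tap, plus timing noise, with a single
    metronome perturbation injected mid-sequence."""
    rng = np.random.default_rng(seed)
    asyn = np.zeros(n_taps)
    for n in range(1, n_taps):
        shift = -perturb_ms if n == perturb_at else 0.0
        asyn[n] = (1.0 - alpha) * asyn[n - 1] + shift + rng.normal(0.0, noise_sd)
    return asyn
```

With a larger alpha (stronger correction, as the abstract reports for discrete actions) the asynchrony introduced by the perturbation decays back to zero in fewer taps.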
38. Multivoxel Pattern Selectivity for Perceptually Relevant Binocular Disparities in the Human Brain
- Author
-
Andrew E. Welchman, Tim J Preston, Shengqiao Li, and Zoe Kourtzi
- Subjects
Adult, Vision Disparity, Humans, Binocular neurons, Visual Cortex, Depth Perception, Communication, General Neuroscience, Brain, Articles, Magnetic Resonance Imaging, Stereopsis, Random dot stereogram, Multivariate Analysis, Binocular disparity, Psychology, Functional magnetic resonance imaging, Neuroscience, Photic Stimulation - Abstract
Processing of binocular disparity is thought to be widespread throughout cortex, highlighting its importance for perception and action. Yet the computations and functional roles underlying this activity across areas remain largely unknown. Here, we trace the neural representations mediating depth perception across human brain areas using multivariate analysis methods and high-resolution imaging. Presenting disparity-defined planes, we determine functional magnetic resonance imaging (fMRI) selectivity to near versus far depth positions. First, we test the perceptual relevance of this selectivity, comparing the pattern-based decoding of fMRI responses evoked by random dot stereograms that support depth perception (correlated RDS) with the decoding of stimuli containing disparities to which the perceptual system is blind (anticorrelated RDS). Preferential disparity selectivity for correlated stimuli in dorsal (visual and parietal) areas and higher ventral area LO (lateral occipital area) suggests encoding of perceptually relevant information, in contrast to early (V1, V2) and intermediate ventral (V3v, V4) visual cortical areas that show similar selectivity for both correlated and anticorrelated stimuli. Second, manipulating disparity parametrically, we show that dorsal areas encode the metric disparity structure of the viewed stimuli (i.e., disparity magnitude), whereas ventral area LO appears to represent depth position in a categorical manner (i.e., disparity sign). Our findings suggest that activity in both visual streams is commensurate with the use of disparity for depth perception but the neural computations may differ. Intriguingly, perceptually relevant responses in the dorsal stream are tuned to disparity content and emerge at a comparatively earlier stage than categorical representations for depth position in the ventral stream. Journal of Neuroscience, 28(44), 11315-11327.
- Published
- 2008
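The pattern-based decoding of near versus far responses described above can be illustrated with a cross-validated nearest-centroid classifier over voxel patterns. This is a generic NumPy stand-in for MVPA-style decoding, not the paper's actual analysis pipeline:

```python
import numpy as np

def decode_depth(patterns, labels, n_folds=5, seed=0):
    """Cross-validated nearest-centroid decoding.

    patterns: (n_trials, n_voxels) response array; labels: (n_trials,)
    class codes (e.g. 0 = near, 1 = far). Returns mean decoding accuracy.
    """
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(labels))
    correct = 0
    for fold in np.array_split(order, n_folds):
        train = np.setdiff1d(order, fold)
        # Class centroids estimated from the training trials only.
        centroids = {c: patterns[train][labels[train] == c].mean(axis=0)
                     for c in np.unique(labels[train])}
        for i in fold:
            pred = min(centroids,
                       key=lambda c: np.linalg.norm(patterns[i] - centroids[c]))
            correct += int(pred == labels[i])
    return correct / len(labels)
```

Above-chance accuracy for correlated but not anticorrelated stimuli would be the decoding signature of perceptually relevant selectivity described in the abstract.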
39. Neural Correlates of Disparity-Defined Shape Discrimination in the Human Brain
- Author
-
Chandramouli Chandrasekaran, Johannes C. Dahmen, Andrew E. Welchman, Zoe Kourtzi, and Victor Canon
- Subjects
Adult, Physiology, Brain Mapping, Discrimination, Psychological, Form Perception, Parietal Lobe, Image Processing, Computer-Assisted, Psychophysics, Humans, Visual hierarchy, Binocular neurons, Visual Cortex, Neurons, Communication, Neural correlates of consciousness, General Neuroscience, Brain, Binocular disparity, Psychology, Neuroscience, Algorithms, Photic Stimulation - Abstract
Binocular disparity, the slight differences between the images registered by our two eyes, provides an important cue when estimating the three-dimensional (3D) structure of the complex environment we inhabit. Sensitivity to binocular disparity is evident at multiple levels of the visual hierarchy in the primate brain, from early visual cortex to parietal and temporal areas. However, the relationship between activity in these areas and key perceptual functions that exploit disparity information for 3D shape perception remains an important open question. Here we investigate the link between human cortical activity and the perception of disparity-defined shape, measuring fMRI responses concurrently with psychophysical shape judgments. We parametrically degraded the coherence of shapes by shuffling the spatial position of dots whose disparity defined the 3D structure and investigated the effect of this stimulus manipulation on both cortical activity and shape discrimination. We report significant relationships between shape coherence and fMRI response in both dorsal (V3, hMT+/V5) and ventral (LOC) visual areas that correspond to the observers' discrimination performance. In contrast to previous suggestions of a dichotomy of disparity-related processes in the ventral and dorsal streams, these findings are consistent with proposed interactions between these pathways that may mediate a continuum of processes important in perceiving 3D shape from coarse contour segmentation to fine curvature estimation.
- Published
- 2007
40. Look but don't touch: Visual cues to surface structure drive somatosensory cortex
- Author
-
Hua-Chun Sun, Andrew E. Welchman, Dorita H. F. Chang, and Massimiliano Di Luca
- Subjects
Adult, Male, Surface Properties, fMRI, Somatosensory Cortex, Magnetic Resonance Imaging, Roughness, Article, Visual material, Young Adult, Pattern Recognition, Visual, MVPA, Image Processing, Computer-Assisted, Humans, Female, Cues, Photic Stimulation, Glossiness - Abstract
When planning interactions with nearby objects, our brain uses visual information to estimate shape, material composition, and surface structure before we come into contact with them. Here we analyse brain activations elicited by different types of visual appearance, measuring fMRI responses to objects that are glossy, matte, rough, or textured. In addition to activation in visual areas, we found that fMRI responses are evoked in the secondary somatosensory area (S2) when looking at glossy and rough surfaces. This activity could be reliably discriminated on the basis of tactile-related visual properties (gloss, rough, and matte), but importantly, other visual properties (i.e., coloured texture) did not substantially change fMRI activity. The activity could not be due solely to tactile imagination, as explicitly asking participants to imagine such surface properties did not lead to the same results. These findings suggest that visual cues to an object's surface properties evoke activity in neural circuits associated with tactile stimulation. This activation may reflect the a priori probability of the physics of the interaction (i.e., the expectation of upcoming friction) that can be used to plan finger placement and grasp force.
Highlights:
- Secondary somatosensory area responds to tactile-related visual properties.
- Visual inputs are necessary to elicit this somatosensory activation.
- This visual-somatosensory crossmodal network may facilitate action planning.
- Published
- 2015
41. fMRI Analysis-by-Synthesis Reveals a Dorsal Hierarchy That Extracts Surface Slant
- Author
-
Hiroshi Ban and Andrew E. Welchman
- Subjects
Male, Analysis of Variance, Depth Perception, Vision Disparity, Universities, Brain, Reproducibility of Results, Articles, Magnetic Resonance Imaging, Oxygen, Discrimination, Psychological, Orientation, Image Processing, Computer-Assisted, Psychophysics, Humans, Regression Analysis, Female, Visual Pathways, Cues, Students, Photic Stimulation - Abstract
The brain's skill in estimating the 3-D orientation of viewed surfaces supports a range of behaviors, from placing an object on a nearby table, to planning the best route when hill walking. This ability relies on integrating depth signals across extensive regions of space that exceed the receptive fields of early sensory neurons. Although hierarchical selection and pooling are central to our understanding of the ventral visual pathway, the successive operations in the dorsal stream are poorly understood. Here we use computational modeling of human fMRI signals to probe the computations that extract 3-D surface orientation from binocular disparity. To understand how representations evolve across the hierarchy, we developed an inference approach using a series of generative models to explain the empirical fMRI data in different cortical areas. Specifically, we simulated the responses of candidate visual processing algorithms and tested how well they explained fMRI responses. We thereby demonstrate a hierarchical refinement of visual representations moving from the representation of edges and figure–ground segmentation (V1, V2) to spatially extensive disparity gradients in V3A. We show that responses in V3A are little affected by low-level image covariates, and have a partial tolerance to the overall depth position. Finally, we show that responses in V3A parallel perceptual judgments of slant. This reveals a relatively short computational hierarchy that captures key information about the 3-D structure of nearby surfaces, and more generally demonstrates an analysis approach that may be of merit in a diverse range of brain imaging domains.
- Published
- 2015
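The analysis-by-synthesis logic in the abstract above (simulate candidate algorithms, then ask how well each explains the measured responses) can be sketched as scoring each candidate by the variance it explains under a linear fit. A generic illustration with made-up candidate names, not the paper's actual models:

```python
import numpy as np

def explained_variance(model_resp, fmri_resp):
    """R^2 of measured responses regressed on a candidate model's
    simulated responses (free gain and offset)."""
    X = np.column_stack([model_resp, np.ones_like(model_resp)])
    beta, *_ = np.linalg.lstsq(X, fmri_resp, rcond=None)
    resid = fmri_resp - X @ beta
    return 1.0 - resid @ resid / np.sum((fmri_resp - fmri_resp.mean()) ** 2)

def best_model(candidates, fmri_resp):
    """Pick the candidate whose simulated responses explain the most
    variance in the data (candidates: name -> response vector)."""
    return max(candidates,
               key=lambda name: explained_variance(candidates[name], fmri_resp))
```

Repeating this comparison area by area is what lets representational content be tracked across the hierarchy, with different candidates winning in different regions.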
42. fMRI Activity in Posterior Parietal Cortex Relates to the Perceptual Use of Binocular Disparity for Both Signal-In-Noise and Feature Difference Tasks
- Author
-
Matthew L. Patten and Andrew E. Welchman
- Subjects
Adult, Male, Brain Mapping, Depth Perception, Vision Disparity, Signal-To-Noise Ratio, behavioral disciplines and activities, Magnetic Resonance Imaging, Radiography, Young Adult, Parietal Lobe, Image Processing, Computer-Assisted, Humans, Female, psychological phenomena and processes, Photic Stimulation, Research Article - Abstract
Visually guided action and interaction depends on the brain’s ability to (a) extract and (b) discriminate meaningful targets from complex retinal inputs. Binocular disparity is known to facilitate this process, and it is an open question how activity in different parts of the visual cortex relates to these fundamental visual abilities. Here we examined fMRI responses related to performance on two different tasks (signal-in-noise “coarse” and feature difference “fine” tasks) that have been widely used in previous work, and are believed to differentially target the visual processes of signal extraction and feature discrimination. We used multi-voxel pattern analysis to decode depth positions (near vs. far) from the fMRI activity evoked while participants were engaged in these tasks. To look for similarities between perceptual judgments and brain activity, we constructed ‘fMR-metric’ functions that described decoding performance as a function of signal magnitude. Thereafter we compared fMR-metric and psychometric functions, and report an association between judged depth and fMRI responses in the posterior parietal cortex during performance on both tasks. This highlights common stages of processing during perceptual performance on these tasks.
- Published
- 2015
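An 'fMR-metric' function, like a psychometric function, maps signal magnitude to performance, so comparing the two often comes down to comparing thresholds. A minimal sketch assuming performance increases monotonically with signal level; the function name and the 75% criterion are illustrative choices, not taken from the paper:

```python
import numpy as np

def threshold_75(signal_levels, proportion_correct):
    """Signal level at which linearly interpolated performance first
    reaches 75% correct, for a monotonically increasing function."""
    return float(np.interp(0.75, proportion_correct, signal_levels))
```

The same summary applies to a psychometric curve (behavioral accuracy per signal level) or an fMR-metric curve (decoding accuracy per signal level), so the two thresholds can be compared directly.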
43. 7 tesla FMRI reveals systematic functional organization for binocular disparity in dorsal visual cortex
- Author
-
Nuno R. Goncalves, Hiroshi Ban, Rosa M. Sánchez-Panchuelo, Susan T. Francis, Denis Schluppeck, and Andrew E. Welchman
- Subjects
Adult, Male, Brain Mapping, Vision Disparity, Articles, Magnetic Resonance Imaging, Oxygen, Image Processing, Computer-Assisted, Humans, Female, Photic Stimulation, Probability, Visual Cortex - Abstract
The binocular disparity between the views of the world registered by the left and right eyes provides a powerful signal about the depth structure of the environment. Despite increasing knowledge of the cortical areas that process disparity from animal models, comparatively little is known about the local architecture of stereoscopic processing in the human brain. Here, we take advantage of the high spatial specificity and image contrast offered by 7 tesla fMRI to test for systematic organization of disparity representations in the human brain. Participants viewed random dot stereogram stimuli depicting different depth positions while we recorded fMRI responses from dorsomedial visual cortex. We repeated measurements across three separate imaging sessions. Using a series of computational modeling approaches, we report three main advances in understanding disparity organization in the human brain. First, we show that disparity preferences are clustered and that this organization persists across imaging sessions, particularly in area V3A. Second, we observe differences between the local distribution of voxel responses in early and dorsomedial visual areas, suggesting different cortical organization. Third, using modeling of voxel responses, we show that higher dorsal areas (V3A, V3B/KO) have properties that are characteristic of human depth judgments: a simple model that uses tuning parameters estimated from fMRI data captures known variations in human psychophysical performance. Together, these findings indicate that human dorsal visual cortex contains selective cortical structures for disparity that may support the neural computations that underlie depth perception.
- Published
- 2015
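The voxel-response modeling in the abstract above fits tuning parameters to fMRI data; a common starting point for such models is a Gaussian tuning curve over disparity. An illustrative sketch in which all parameter values are arbitrary, not estimates from the study:

```python
import numpy as np

def gaussian_disparity_tuning(disparity, preferred, width=0.3, amplitude=1.0):
    """Response of a disparity-tuned unit: a Gaussian peaking at the
    unit's preferred disparity, with tuning width and response gain."""
    return amplitude * np.exp(-0.5 * ((disparity - preferred) / width) ** 2)
```

Fitting `preferred` and `width` per voxel yields the kind of tuning-parameter map used to test whether disparity preferences are clustered across the cortical surface.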
45. Human observers are biased in judging the angular approach of a projectile
- Author
-
Andrew E. Welchman, Julie M. Harris, and Val L. Tuck
- Subjects
Adult, Vision Disparity, Adolescent, Eye Movements, Computer science, Motion Perception, Models, Biological, Motion-in-depth, Optics, Looming, Vision, Monocular, Psychophysics, Humans, Computer vision, Computer Simulation, Stereopsis, Depth Perception, Vision, Binocular, Projectile, Angular displacement, Disparity, Sensory Systems, Ophthalmology, Trajectory, Artificial intelligence, Cues, Binocular vision, Mathematics - Abstract
How do we decide whether an object approaching us will hit us? Information in the optic array should be sufficient for us to determine the approaching trajectory of a projectile. However, observers' reports of angular trajectories near the mid-sagittal plane have suggested that, when using binocular information, observers perceive trajectory angles as larger than they actually are (Harris & Dean, 2003; J. Exp. Psychol., in press). We examine the generality of this previous report by examining the perception of trajectory direction, first for computer-rendered, stereoscopically presented, rich-cue objects, and then for real objects moving in the world. We find that, even under rich-cue conditions and with real moving objects, observers show a positive bias, overestimating the angle of approach when movement is near the mid-sagittal plane. The findings question whether the visual system can make explicit estimates of the 3-D location and movement of objects in depth.
- Published
- 2004
- Full Text
- View/download PDF
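The angle being judged in the study above can be written down directly: for motion with a lateral component and a component toward the observer, the physical trajectory angle relative to the mid-sagittal plane is atan(lateral speed / approach speed). A small worked example of that geometric identity (not the paper's model of the perceptual bias):

```python
import math

def trajectory_angle_deg(lateral_speed, approach_speed):
    """Physical angle (degrees) between a projectile's trajectory and
    the mid-sagittal plane, from its speed components."""
    return math.degrees(math.atan2(lateral_speed, approach_speed))

# Equal lateral and approach speeds give a 45 degree trajectory;
# pure approach (no lateral motion) gives 0 degrees.
```

The reported overestimation means observers' judged angles exceed this physical value, especially when the lateral component is small.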
46. Is neural filling–in necessary to explain the perceptual completion of motion and depth information?
- Author
-
Andrew E. Welchman and Julie M. Harris
- Subjects
Visual perception, Models, Neurological, Motion Perception, Adaptation (eye), General Biochemistry, Genetics and Molecular Biology, Motion (physics), Edge detection, Perception, Phenomenon, Humans, Computer vision, General Environmental Science, Neurons, Depth Perception, General Immunology and Microbiology, Filling-in, Blind spot, General Medicine, Artificial intelligence, General Agricultural and Biological Sciences, Psychology, Photic Stimulation, Research Article, Cognitive psychology
Retinal activity is the first stage of visual perception. Retinal sampling is non-uniform and not continuous, yet visual experience is not characterized by holes and discontinuities in the world. How does the brain achieve this perceptual completion? Fifty years ago, it was suggested that visual perception involves a two-stage process of (i) edge detection followed by (ii) neural filling-in of surface properties. We examine whether this general hypothesis can account for the specific example of perceptual completion of a small target surrounded by dynamic dots (an 'artificial scotoma'), a phenomenon argued to provide insight into the mechanisms responsible for perception. We degrade the target's borders, first using blur and then using depth continuity, and find that border degradation does not influence time to target disappearance. This indicates that important information for the continuity of target perception is conveyed at a coarse spatial scale. We suggest that target disappearance could result from adaptation that is not specific to borders, and question the need to hypothesize an active filling-in process to explain this phenomenon.
- Published
- 2003
47. Reviews: Color Perception: Philosophical, Psychological, Artistic, and Computational Perspectives, Visual Perception: An Introduction
- Author
-
Andrew E. Welchman and Marina Bloj
- Subjects
Cognitive science, Ophthalmology, Visual perception, Artificial Intelligence, Experimental and Cognitive Psychology, Psychology, Sensory Systems - Published
- 2002
48. Perceptual integration of depth cues is facilitated by inhibitory processing in dorsal visual cortex
- Author
-
Reuben Rideaux and Andrew E. Welchman
- Subjects
Dorsum, Ophthalmology, Communication, Visual cortex, Perceptual integration, Depth perception, Inhibitory postsynaptic potential, Psychology, Sensory Systems - Published
- 2017
49. 'What not' encoding facilitates stereoscopic depth judgments
- Author
-
Andrew E. Welchman and Nuno Reis Goncalves
- Subjects
Ophthalmology, Stereoscopic depth, Computer science, Encoding (memory), Computer vision, Artificial intelligence, Sensory Systems - Published
- 2017
50. Optimized computation of binocular disparity by populations of simple and complex cells
- Author
-
Nuno Reis Goncalves and Andrew E. Welchman
- Subjects
Ophthalmology, Optics, Computer science, Computation, Binocular disparity, Algorithm, Sensory Systems - Published
- 2017