16 results for "Bülthoff, Heinrich H."
Search Results
2. The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch
- Author
- Wallraven, Christian, Bülthoff, Heinrich H., Waterkamp, Steffen, van Dam, Loes, and Gaißert, Nina
- Published
- 2014
- Full Text
- View/download PDF
3. Visual, haptic and crossmodal recognition of scenes
- Author
- Newell, Fiona N., Woods, Andrew T., Mernagh, Marion, and Bülthoff, Heinrich H.
- Published
- 2005
- Full Text
- View/download PDF
4. Multisensory Interactions in Head and Body Centered Perception of Verticality.
- Author
- De Winkel, Ksander N., Edel, Ellen, Happee, Riender, and Bülthoff, Heinrich H.
- Subjects
BODY image, VECTION, SNOEZELEN, VISUAL perception, HEAD, VISUALIZATION
- Abstract
Percepts of verticality are thought to be constructed as a weighted average of multisensory inputs, but the observed weights differ considerably between studies. In the present study, we evaluate whether this can be explained by differences in how visual, somatosensory and proprioceptive cues contribute to representations of the Head In Space (HIS) and Body In Space (BIS). Ten participants stood on a force plate on top of a motion platform while wearing a visualization device that allowed us to artificially tilt their visual surroundings. They were presented with (in)congruent combinations of visual, platform, and head tilt, and performed Rod & Frame Test (RFT) and Subjective Postural Vertical (SPV) tasks. We also recorded postural responses to evaluate the relation between perception and balance. The perception data show that body tilt, head tilt, and visual tilt affect the HIS and BIS in both experimental tasks. For the RFT task, visual tilt induced considerable biases (≈ 10° for 36° visual tilt) in the direction of the vertical expressed in the visual scene; for the SPV task, participants also adjusted platform tilt to correct for illusory body tilt induced by the visual stimuli, but effects were much smaller (≈ 0.25°). Likewise, postural data from the SPV task indicate participants slightly shifted their weight to counteract visual tilt (0.3° for 36° visual tilt). The data reveal a striking dissociation of visual effects between the two tasks. We find that the data can be explained well using a model where percepts of the HIS and BIS are constructed from direct signals from head and body sensors, respectively, and indirect signals based on body and head signals but corrected for perceived neck tilt. These findings show that perception of the HIS and BIS derives from the same sensory signals, but with profoundly different weighting factors.
We conclude that observations of different weightings between studies likely result from querying of distinct latent constructs referenced to the body or head in space. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
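The weighted-average construction of verticality percepts described in the abstract above can be sketched numerically. This is a minimal illustration of reliability-weighted averaging; the cue values and noise variances below are assumptions for the example, not values from the study:

```python
# Reliability-weighted averaging of multisensory tilt cues (illustrative).
# Each cue contributes in proportion to its inverse variance.

def weighted_average(cues, variances):
    """Combine cue estimates using inverse-variance weights."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * c for w, c in zip(weights, cues)) / total
    return estimate, [w / total for w in weights]

# Hypothetical tilt estimates (degrees) from visual, somatosensory,
# and vestibular cues, with assumed noise variances.
estimate, weights = weighted_average(cues=[36.0, 0.0, 2.0],
                                     variances=[16.0, 4.0, 8.0])
print(round(estimate, 2))  # combined tilt percept, pulled toward reliable cues
```

Because the weights are normalized to sum to one, a cue with large variance (here the 36° visual tilt) biases the percept only modestly, which is qualitatively the pattern the abstract reports for small SPV effects.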
5. Vection is the main contributor to motion sickness induced by visual yaw rotation: Implications for conflict and eye movement theories
- Author
- Nooij, Suzanne A. E., Pretto, Paolo, Oberfeld, Daniel, Hecht, Heiko, and Bülthoff, Heinrich H.
- Subjects
Adult; Male; Eye Movements; Rotation; Vision; Physiology; Visual System; Motion Sickness; Cognitive Neuroscience; Sensory Physiology; Acceleration; lcsh:Medicine; Social Sciences; Geometry; Models, Biological; Motor Reactions; Motion; Young Adult; Ocular System; Medicine and Health Sciences; Psychology; Humans; lcsh:Science; Musculoskeletal System; Nystagmus, Optokinetic; Physics; lcsh:R; Biology and Life Sciences; Classical Mechanics; Middle Aged; Sensory Systems; Postural Control; Ellipses; Head Movements; Physical Sciences; Visual Perception; Eyes; Cognitive Science; lcsh:Q; Sensory Perception; Female; Anatomy; Head; Mathematics; Research Article; Neuroscience
- Abstract
This study investigated the role of vection (i.e., a visually induced sense of self-motion), optokinetic nystagmus (OKN), and inadvertent head movements in visually induced motion sickness (VIMS), evoked by yaw rotation of the visual surround. These three elements have all been proposed as contributing factors in VIMS, as they can be linked to different motion sickness theories. However, a full understanding of the role of each factor is still lacking because independent manipulation has proven difficult in the past. We adopted an integrative approach to the problem by obtaining measures of potentially relevant parameters in four experimental conditions and subsequently combining them in a linear mixed regression model. To that end, participants were exposed to visual yaw rotation in four separate sessions. Using a full factorial design, the OKN was manipulated by a fixation target (present/absent), and vection strength by introducing a conflict in the motion direction of the central and peripheral field of view (present/absent). In all conditions, head movements were minimized as much as possible. Measured parameters included vection strength, vection variability, OKN slow phase velocity, OKN frequency, the number of inadvertent head movements, and inadvertent head tilt. Results show that VIMS increases with vection strength, but that this relation varies among participants (R2 = 0.48). Regression parameters for vection variability, head and eye movement parameters were not significant. These results may seem to be in line with the Sensory Conflict theory on motion sickness, but we argue that a more detailed definition of the exact nature of the conflict is required to fully appreciate the relationship between vection and VIMS.
- Published
- 2017
6. Causal Inference in Multisensory Heading Estimation
- Author
- de Winkel, Ksander N., Katliar, Mikhail, and Bülthoff, Heinrich H.
- Subjects
Central Nervous System; Adult; Male; Inertia; Vision; Acceleration; Models, Neurological; Velocity; lcsh:Medicine; Social Sciences; Nervous System; Motion; Young Adult; Medicine and Health Sciences; Psychology; Humans; lcsh:Science; Physics; lcsh:R; Classical Mechanics; Biology and Life Sciences; Probability Theory; Statistical Dispersion; Physical Sciences; Visual Perception; lcsh:Q; Sensory Perception; Female; Anatomy; Mathematics; Algorithms; Photic Stimulation; Research Article; Neuroscience; Statistical Distributions
- Abstract
A large body of research shows that the Central Nervous System (CNS) integrates multisensory information. However, this strategy should only apply to multisensory signals that have a common cause; independent signals should be segregated. Causal Inference (CI) models account for this notion. Surprisingly, previous findings suggested that visual and inertial cues on heading of self-motion are integrated regardless of discrepancy. We hypothesized that CI does occur, but that characteristics of the motion profiles affect multisensory processing. Participants estimated heading of visual-inertial motion stimuli with several different motion profiles and a range of intersensory discrepancies. The results support the hypothesis that judgments of signal causality are included in the heading estimation process. Moreover, the data suggest a decreasing tolerance for discrepancies and an increasing reliance on visual cues for longer duration motions.
- Published
- 2017
7. A Biologically-Inspired Model to Predict Perceived Visual Speed as a Function of the Stimulated Portion of the Visual Field.
- Author
- Solari, Fabio, Caramenti, Martina, Chessa, Manuela, Pretto, Paolo, Bülthoff, Heinrich H., and Bresciani, Jean-Pierre
- Subjects
MOTION perception (Vision), OPTICAL flow, VISUAL pathways, SPEED, TIME perception
- Abstract
Spatial orientation relies on a representation of the position and orientation of the body relative to the surrounding environment. When navigating in the environment, this representation must be constantly updated taking into account the direction, speed, and amplitude of body motion. Visual information plays an important role in this updating process, notably via optical flow. Here, we systematically investigated how the size and the simulated portion of the field of view (FoV) affect the perceived visual speed of human observers. We propose a computational model to account for the patterns of human data. This model is composed of hierarchical layers of cells that model the neural processing stages of the dorsal visual pathway. Specifically, we consider that the activity of the MT area is processed by populations of modeled MST cells that are sensitive to the differential components of the optical flow, thus producing selectivity for specific patterns of optical flow. Our results indicate that the proposed computational model is able to describe the experimental evidence and could be used to predict expected biases of speed perception for conditions in which only some portions of the visual field are visible. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
8. Action can amplify motion-induced illusory displacement
- Author
- Caniard, Franck, Bülthoff, Heinrich H., and Thornton, Ian M.
- Subjects
active vision, closed-loop control, game psychophysics, Vision, Virtual reality, Field dependence-independence, mobile devices, Behavioral Neuroscience, Psychiatry and Mental health, Neuropsychology and Physiological Psychology, perception-action, Neurology, local-global motion, motion-induced position shifts, motion illusions, Motion perception (Vision), Original Research Article, Biological Psychiatry, Neuroscience
- Abstract
Local motion is known to produce strong illusory displacement in the perceived position of globally static objects. For example, if a dot-cloud or grating drifts to the left within a stationary aperture, the perceived position of the whole aperture will also be shifted to the left. Previously, we used a simple tracking task to demonstrate that active control over the global position of an object did not eliminate this form of illusion. Here, we used a new iPad task to directly compare the magnitude of illusory displacement under active and passive conditions. In the active condition, participants guided a drifting Gabor patch along a virtual slalom course by using the tilt control of an iPad. The task was to position the patch so that it entered each gate at the direct center, and we used the left/right deviations from that point as our dependent measure. In the passive condition, participants watched playback of standardized trajectories along the same course. We systematically varied deviation from midpoint at gate entry, and participants made 2AFC left/right judgments. We fitted cumulative normal functions to individual distributions and extracted the point of subjective equality (PSE) as our dependent measure. To our surprise, the magnitude of displacement was consistently larger under active than under passive conditions. Importantly, control conditions ruled out the possibility that such amplification results from lack of motor control or differences in global trajectories as performance estimates were equivalent in the two conditions in the absence of local motion. Our results suggest that the illusion penetrates multiple levels of the perception-action cycle, indicating that one important direction for the future of perceptual illusions may be to more fully explore their influence during active vision.
- Published
- 2015
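The passive-condition analysis described in the abstract above (fitting cumulative normal functions to 2AFC left/right judgments and extracting the PSE) can be sketched with a stdlib-only least-squares fit. The response data and the grid-search fitting routine are illustrative assumptions, not the authors' actual data or method:

```python
# Fit a cumulative normal psychometric function to 2AFC "right" response
# rates and extract the point of subjective equality (PSE): the offset
# at which "left" and "right" responses are equally likely.
import math

def cum_normal(x, mu, sigma):
    # Cumulative normal via the error function (no SciPy needed).
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_pse(offsets, p_right):
    """Least-squares grid search over (mu, sigma); returns (PSE, slope)."""
    best = (float("inf"), 0.0, 1.0)
    for mu in [m / 100.0 for m in range(-300, 301)]:
        for sigma in [s / 10.0 for s in range(5, 51)]:
            err = sum((cum_normal(x, mu, sigma) - p) ** 2
                      for x, p in zip(offsets, p_right))
            if err < best[0]:
                best = (err, mu, sigma)
    return best[1], best[2]

# Hypothetical gate-entry offsets (deg) and observed "right" rates.
offsets = [-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0]
p_right = [0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.99]
pse, sigma = fit_pse(offsets, p_right)
print(round(pse, 2), round(sigma, 2))
```

With this toy data, more than half of responses at zero offset are "right", so the fitted PSE lands slightly left of zero: the offset perceived as center is shifted, which is exactly the displacement measure the study compares across conditions.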
9. Modulation of vection latencies in the full-body illusion.
- Author
- Nesti, Alessandro, Rognini, Giulio, Herbelin, Bruno, Bülthoff, Heinrich H., Chuang, Lewis, and Blanke, Olaf
- Subjects
VECTION, ILLUSION (Philosophy), SELF-consciousness (Awareness), VIRTUAL reality, VESTIBULAR apparatus
- Abstract
Current neuroscientific models of bodily self-consciousness (BSC) argue that inaccurate integration of sensory signals leads to altered states of BSC. Indeed, using virtual reality technology, observers viewing a fake or virtual body while being exposed to tactile stimulation of the real body can experience illusory ownership over, and mislocalization towards, the virtual body (Full-Body Illusion, FBI). Among the sensory inputs contributing to BSC, the vestibular system is believed to play a central role due to its importance in estimating self-motion and orientation. This theory is supported by clinical evidence that vestibular loss patients are more prone to altered BSC states, and by recent experimental evidence that visuo-vestibular conflicts can disrupt BSC in healthy individuals. Nevertheless, the contribution of vestibular information and self-motion perception to BSC remains largely unexplored. Here, we investigate the relationship between alterations of BSC and self-motion sensitivity in healthy individuals. Fifteen participants were exposed to visuo-vibrotactile conflicts designed to induce an FBI, and subsequently to visual rotations that evoked illusory self-motion (vection). We found that synchronous visuo-vibrotactile stimulation successfully induced the FBI, and further observed a relationship between the strength of the FBI and the time necessary for complete vection to arise. Specifically, higher self-reported FBI scores across synchronous and asynchronous conditions were associated with shorter vection latencies. Our findings are in agreement with clinical observations that vestibular loss patients have higher FBI susceptibility and lower vection latencies, and argue for increased visual over vestibular dependency during altered states of BSC. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
10. Cultural differences in room size perception.
- Author
- Saulton, Aurelie, Bülthoff, Heinrich H., de la Rosa, Stephan, and Dodds, Trevor J.
- Subjects
SPACE perception, CROSS-cultural differences, COGNITIVE ability, VISUAL perception, NEUROSCIENCES, PSYCHOLOGY
- Abstract
Cultural differences in spatial perception have been little investigated, which gives rise to the impression that spatial cognitive processes might be universal. Contrary to this idea, we demonstrate cultural differences in spatial volume perception of computer generated rooms between Germans and South Koreans. We used a psychophysical task in which participants had to judge whether a rectangular room was larger or smaller than a square room of reference. We systematically varied the room rectangularity (depth to width aspect ratio) and the viewpoint (middle of the short wall vs. long wall) from which the room was viewed. South Koreans were significantly less biased by room rectangularity and viewpoint than their German counterparts. These results are in line with previous notions of general cognitive processing strategies being more context dependent in East Asian societies than Western ones. We point to the necessity of considering culturally-specific cognitive processing strategies in visual spatial cognition research. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
11. Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation.
- Author
- Nesti, Alessandro, de Winkel, Ksander, and Bülthoff, Heinrich H.
- Subjects
STIMULUS & response (Biology), CENTRAL nervous system, SENSORY neurons, COGNITIVE psychology, SENSORY perception, DRIFT diffusion models
- Abstract
While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing us to complete crucial tasks such as maintaining balance. However, little is known about how the duration of the motion stimuli influences our performance in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performance. Observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
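The leaky integration of sensory evidence invoked in the abstract above can be sketched as a discrete-time leaky accumulator. The leak rate, time step, and input values below are illustrative assumptions, not parameters fitted by the authors:

```python
# Leaky accumulation of sensory evidence: each step adds the incoming
# signal and loses a fixed fraction (the leak) of what has accumulated.
# With constant input, the state saturates at input / leak, mirroring
# differential thresholds that converge to a constant value over time.

def leaky_integrate(inputs, leak, dt=0.01):
    state = 0.0
    trace = []
    for s in inputs:
        state += dt * (s - leak * state)
        trace.append(state)
    return trace

# Constant evidence stream of 1.0 for 5 simulated seconds, leak = 0.5 /s.
trace = leaky_integrate([1.0] * 500, leak=0.5)
print(round(trace[-1], 3))  # close to, but still below, the 2.0 asymptote
```

The key qualitative behavior matches the abstract: accumulated evidence grows with stimulus duration but levels off, so longer stimuli improve discrimination only up to a point.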
12. Similarity and categorization: From vision to touch
- Author
- Gaißert, Nina, Bülthoff, Heinrich H., and Wallraven, Christian
- Subjects
SIMILARITY (Psychology), CATEGORIZATION (Psychology), PERCEPTUAL learning, MULTIDIMENSIONAL scaling, VISUAL learning, SENSORY perception, TOUCH, EXPERIMENTAL psychology
- Abstract
Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of everyday learning scenarios: in a fully unconstrained task, objects were freely categorized; in a semi-constrained task, exactly three groups had to be created; whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
13. The Role of Stereo Vision in Visual-Vestibular Integration.
- Author
- Butler, John S., Campos, Jennifer L., Bülthoff, Heinrich H., and Smith, Stuart T.
- Subjects
VISION, VESTIBULAR apparatus, SENSES, ESTIMATION theory, VISUAL perception, SENSORY perception
- Abstract
Self-motion through an environment stimulates several sensory systems, including the visual system and the vestibular system. Recent work in heading estimation has demonstrated that visual and vestibular cues are typically integrated in a statistically optimal manner, consistent with Maximum Likelihood Estimation predictions. However, there has been some indication that cue integration may be affected by characteristics of the visual stimulus. Therefore, the current experiment evaluated whether presenting optic flow stimuli stereoscopically, or presenting both eyes with the same image (binocularly) affects combined visual-vestibular heading estimates. Participants performed a two-interval forced-choice task in which they were asked which of two presented movements was more rightward. They were presented with either visual cues alone, vestibular cues alone or both cues combined. Measures of reliability were obtained for both binocular and stereoscopic conditions. Group level analyses demonstrated that when stereoscopic information was available there was clear evidence of optimal integration, yet when only binocular information was available weaker evidence of cue integration was observed. Exploratory individual analyses demonstrated that for the stereoscopic condition 90% of participants exhibited optimal integration, whereas for the binocular condition only 60% of participants exhibited results consistent with optimal integration. Overall, these findings suggest that stereo vision may be important for self-motion perception, particularly under combined visual-vestibular conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
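The Maximum Likelihood Estimation prediction tested in the abstract above, namely that the optimally integrated visual-vestibular estimate is more reliable than either cue alone, can be illustrated with hypothetical numbers (the estimates and variances below are not values from the study):

```python
# MLE cue integration: the optimal combined estimate weights each cue
# by its inverse variance, and its predicted variance is smaller than
# either single-cue variance. All numbers are illustrative assumptions.

def mle_combine(est_a, var_a, est_b, var_b):
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    combined = w_a * est_a + (1.0 - w_a) * est_b
    combined_var = (var_a * var_b) / (var_a + var_b)
    return combined, combined_var

# Hypothetical heading estimates (deg) with assumed variances.
heading, var = mle_combine(est_a=10.0, var_a=4.0,   # visual cue
                           est_b=14.0, var_b=12.0)  # vestibular cue
print(heading, var)  # combined variance falls below min(4.0, 12.0)
```

Testing whether measured combined-cue reliability matches this predicted variance reduction is the standard check for optimal integration, which is the comparison the study runs for the stereoscopic and binocular conditions.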
14. Storing upright turns: how visual and vestibular cues interact during the encoding and recalling process.
- Author
- Vidal, Manuel and Bülthoff, Heinrich H.
- Subjects
MEMORIZATION, MODALITY (Theory of knowledge), VESTIBULAR apparatus, VISUALIZATION, VISION
- Abstract
Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it is not yet very clear how different senses providing information about our own movements combine in order to provide a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited lifetime star-field rotations), with the visual scene turning 1.5 times faster when combined (unnoticed conflict). Then they were asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow that of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue that it could be combined with during the encoding. Therefore, turns in each modality, visual, and vestibular are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that with both visual and vestibular cues available, these combine in order to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails for this rotation displacement task when a matching problem is introduced. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
15. Merging the senses into a robust percept
- Author
- Ernst, Marc O. and Bülthoff, Heinrich H.
- Subjects
BRAIN, SENSES, TOUCH, VISION, AUDITIONS
- Abstract
To perceive the external environment our brain uses multiple sources of sensory information derived from several different modalities, including vision, touch and audition. All these different sources of information have to be efficiently merged to form a coherent and robust percept. Here we highlight some of the mechanisms that underlie this merging of the senses in the brain. We show that, depending on the type of information, different combination and integration strategies are used and that prior knowledge is often required for interpreting the sensory signals. [Copyright Elsevier]
- Published
- 2004
- Full Text
- View/download PDF
16. Touch can change visual slant perception.
- Author
- Ernst, Marc O., Banks, Martin S., and Bülthoff, Heinrich H.
- Subjects
VISUAL perception, TOUCH, VISION
- Abstract
The visual system uses several signals to deduce the three-dimensional structure of the environment, including binocular disparity, texture gradients, shading and motion parallax. Although each of these sources of information is independently insufficient to yield reliable three-dimensional structure from everyday scenes, the visual system combines them by weighting the available information; altering the weights would therefore change the perceived structure. We report that haptic feedback (active touch) increases the weight of a consistent surface-slant signal relative to inconsistent signals. Thus, appearance of a subsequently viewed surface is changed: the surface appears slanted in the direction specified by the haptically reinforced signal. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library