152 results for "Geoffrey P Bingham"
Search Results
2. Control of visually guided braking using constant-$$\tau$$ and proportional rate
- Author
-
Didem Kadihasanoglu, Randall D. Beer, Geoffrey P. Bingham, and Ned Bingham
- Subjects
Proportional rate control, Visually guided braking, General Neuroscience, Constant tau strategy, Information-based control, Control theory, Constant tau-dot strategy, Joystick, Brake, Affordance-based control - Abstract
This study investigated the optical information and control strategies used in visually guided braking. In such tasks, drivers exhibit two different braking behaviors: impulsive braking and continuously regulated braking. We designed two experiments involving a simulated braking task to investigate these two behaviors. Participants viewed computer displays simulating an approach along a linear path over a textured ground surface toward a set of road signs. The task was to use a joystick as a brake to stop as close as possible to the road signs. Our results showed that participants relied on a weak constant- $$\tau$$ strategy (Bingham 1995) when regulating the brake impulsively. They used discrete $$\tau$$ values as critical values and they regulated the brake so as not to let $$\tau$$ fall below these values. Our results also showed that proportional rate control (Anderson and Bingham 2010, 2011) is used in continuously regulated braking. Participants initiated braking at a certain proportional rate value and controlled braking so as to maintain that value constant during the approach. Proportional rate control is robust because the value can fluctuate within a range to yield good performance. We argue that proportional rate control unifies the information-based approach and affordance-based approach to visually guided braking.
- Published
- 2020
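The impulsive strategy described in this abstract can be made concrete with a toy simulation: τ is the current distance divided by the closing speed, and the brake is pulsed whenever τ threatens to fall below a critical value. The deceleration, threshold, and initial conditions below are illustrative assumptions, not the authors' experimental values or model.

```python
def simulate_braking(z=50.0, v=10.0, tau_crit=2.0, brake_decel=4.0, dt=0.01):
    """Coast toward a target at distance z with speed v; pulse the brake
    whenever tau = z / v would otherwise drop below the critical value."""
    t = 0.0
    while z > 0.05 and v > 0.01:
        tau = z / v                                   # first-order time-to-contact
        a = -brake_decel if tau <= tau_crit else 0.0  # impulsive brake pulse
        v = max(v + a * dt, 0.0)
        z -= v * dt
        t += dt
    return z, v, t

final_z, final_v, t_stop = simulate_braking()
```

Riding τ at its critical value brings the simulated vehicle to rest close to the target without a collision, which is the sense in which discrete τ values can serve as control margins.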
3. Stable visually guided reaching does not require an internal feedforward model to compensate for internal delay: Data and model
- Author
-
Geoffrey P. Bingham, Xiaoye Michael Wang, and Rachel A. Herth
- Subjects
Ophthalmology, Sensory Systems - Abstract
Visually guided reaches are performed in ≈1 s. Given unstable feedback control with neural transmission delay, stable visually guided reaching is assumed to require internal feedforward models that generate simulated feedback without delay, which combines with actual feedback for stability. We investigated whether stable visually guided reaching requires internal models to handle such delay. Participants performed rapid targeted reaches in a virtual environment with different mappings between the speeds of the hand and the hand avatar. First, participants reached with visual guidance and constant mapping. Second, feedforward reaches were performed with constant mapping and the hand avatar visible only at reach start and end. Reaches were accurate. Third, participants performed reaches with visual guidance and different mappings on every trial. We expected performance as in the first condition. Fourth, feedforward reaches with variable mapping yielded large errors, showing that visual guidance in the previous condition was successful despite an ineffective internal model. We simulated reaches using a proportional rate model with disparity tau controlling the virtual equilibrium point in an Equilibrium Point (EP) model. The time-dimensioned information and dynamics remained stable with delayed feedback. Finally, we fit movement times using the proportional rate EP model with delays of 0, 50, and 100 ms. With the fitted model parameters, we compared the model reach trajectories with the behavioral trajectories. Stable visually guided reaching did not require an internal feedforward model.
- Published
- 2021
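The stability claim, that purely feedback control can survive sensory delay, can be checked with a minimal numerical sketch: a proportional closing-rate law driven by position feedback sampled 100 ms in the past still converges on the target. The gain, delay, and reach distance are illustrative assumptions, not the paper's fitted EP-model parameters.

```python
def simulate_reach(delay_ms=100.0, gain=4.0, dt=0.001, duration=2.0):
    """Reach toward a 0.30 m target using only delayed position feedback."""
    delay_steps = int(delay_ms / 1000.0 / dt)
    target, x = 0.30, 0.0
    history = [0.0] * (delay_steps + 1)    # buffer of past hand positions
    for _ in range(int(duration / dt)):
        x_seen = history[0]                # visual feedback from delay_ms ago
        v = gain * (target - x_seen)       # proportional closing rate
        x += v * dt                        # Euler step of hand position
        history.append(x)
        history.pop(0)
    return x

x_delayed = simulate_reach(delay_ms=100.0)
x_instant = simulate_reach(delay_ms=0.0)
```

With this gain the product of gain and delay stays well below the classical instability bound for delayed proportional feedback, so the delayed and undelayed reaches both settle on the target without an internal forward model.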
4. Investigation of optical texture properties as relative distance information for monocular guidance of reaching
- Author
-
Geoffrey P. Bingham, Rachel A. Herth, Pin Yang, Zhongting Chen, and Xiaoye Michael Wang
- Subjects
Depth Perception, Vision, Binocular, Ophthalmology, Vision, Monocular, Distance Perception, Humans, Sensory Systems - Abstract
Reaches guided using monocular versus binocular vision have been found to be equally fast and accurate only when optical texture was available, projected from a support surface across which the reach was performed. We now investigate which property of optical texture elements is used to perceive relative distance: image width, image height, or image shape. Participants performed reaches to match target distances. Targets appeared on a textured surface on the left, and participants reached to place their hand at the target distance along a surface on the right. A perturbation discriminated which texture property was being used: the right-hand surface was higher than the left-hand one by 2, 4, or 6 cm. Participants should overshoot if they matched texture image width at the target, undershoot if they matched image shape, and, if they matched image height, undershoot far distances and, depending on the overall eye height, overshoot near distances. In Experiment 1, participants reached by moving a joystick to control a hand avatar in a virtual environment display. Their eye height was 15 cm. For each texture property, distances were predicted from the viewing geometry. Results ruled out image width in favor of image height or shape. In Experiment 2, participants at a 50 cm eye height reached in an actual environment with the same manipulations. Results supported use of image shape (or foreshortening), consistent with findings on texture properties used in slant perception. We discuss implications for models of visually guided reaching.
- Published
- 2022
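The overshoot/undershoot predictions follow from the viewing geometry of a ground-surface texture element. Below is a sketch with illustrative numbers for a 15 cm eye height and a response surface raised by 4 cm; it is not the paper's exact derivation.

```python
import math

def texture_image(s, d, h):
    """Angular image width, height, and shape (height/width) of a ground
    texture element of side s at horizontal distance d, viewed from eye
    height h (small-angle approximation)."""
    dist = math.hypot(d, h)                # eye-to-element distance
    width = s / dist
    height = s * h / (d * d + h * h)       # foreshortened vertical extent
    return width, height, height / width   # shape ratio equals h / dist

h_eye, d_target = 0.15, 0.30               # 15 cm eye height, 30 cm target
h_raised = h_eye - 0.04                    # response surface raised by 4 cm
# reach distance on the raised surface reproducing the same image SHAPE
d_shape = d_target * h_raised / h_eye                      # undershoots
# reach distance on the raised surface reproducing the same image WIDTH
d_width = math.sqrt(d_target**2 + h_eye**2 - h_raised**2)  # overshoots
```

Matching image shape (foreshortening) on the raised surface places the hand short of the target, while matching image width places it beyond, which is the logic of the perturbation.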
5. The effect of movement frequency on perceptual-motor learning of a novel bimanual coordination pattern
- Author
-
Shaochen Huang, Jacob Layer, Derek Smith, Geoffrey P. Bingham, and Qin Zhu
- Subjects
Movement, Biophysics, Humans, Learning, Experimental and Cognitive Psychology, Orthopedics and Sports Medicine, General Medicine, Kinesthesis, Psychomotor Performance - Abstract
The most widely known studies of rhythmic limb coordination showed that frequency strongly affects the stability of some coordinations (e.g. 180° relative phase) but not others (e.g. 0°). The coupling of such rhythmic limb movements was then shown to be perceptual. Frequency affected the stability of perceptual information. We now investigated whether frequency would impact the pickup of information for learning a novel bimanual coordination pattern (e.g. 90°) and the ability to sustain the coordination at various frequencies. Twenty participants were recruited and assessed on their performance of bimanual coordination at 0°, 180°, and 90° at five scanning frequencies before and after training at 90°, during which they were assigned to practice with either a high (2.5 Hz) or low (0.5 Hz) frequency until attaining proficiency. The results showed that learning was frequency specific. The best post-training performance occurred at the trained frequency. Although the coordination could be acquired through high frequency training, it was at the cost of a greater amount of training and most surprisingly, did not yield improved performance at lower frequencies that are normally thought to be easier. The findings suggest that movement frequency may determine whether visual or kinesthetic information is used for learning and control of bimanual coordination.
- Published
- 2021
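The frequency-dependent stability of 0° and 180° summarized above is often modeled with the standard Haken-Kelso-Bunz (HKB) relative-phase equation, dφ/dt = -a·sin(φ) - 2b·sin(2φ), in which the ratio b/a shrinks as movement frequency rises. A minimal sketch follows; the parameters are illustrative, and this basic form does not capture learned 90° coordination.

```python
import math

def phase_drift(phi, a, b):
    """HKB dynamics for relative phase phi (radians)."""
    return -a * math.sin(phi) - 2 * b * math.sin(2 * phi)

def is_stable(phi, a, b, eps=1e-5):
    """A fixed point is stable when the drift has negative slope there."""
    slope = (phase_drift(phi + eps, a, b) - phase_drift(phi - eps, a, b)) / (2 * eps)
    return abs(phase_drift(phi, a, b)) < 1e-8 and slope < 0

# low frequency (b/a large): both 0 deg and 180 deg are stable
low = (is_stable(0.0, 1.0, 1.0), is_stable(math.pi, 1.0, 1.0))
# high frequency (b/a < 1/4): 180 deg loses stability, 0 deg remains
high = (is_stable(0.0, 1.0, 0.2), is_stable(math.pi, 1.0, 0.2))
```

The antiphase pattern loses stability exactly when b/a falls below 1/4, which is the standard summary of why higher movement frequencies destabilize 180° but not 0°.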
6. Training 90° bimanual coordination at high frequency yields dependence on kinesthetic information and poor performance of dyadic unimanual coordination
- Author
-
Jacob S. Layer, Geoffrey P. Bingham, Qin Zhu, Shaochen Huang, and Derek T. Smith
- Subjects
Movement, Biophysics, Kinesthetic learning, Experimental and Cognitive Psychology, General Medicine, Audiology, Functional Laterality, Rhythm, Humans, Learning, Orthopedics and Sports Medicine, Psychology, Kinesthesis, Psychomotor Performance - Abstract
Two groups of participants were trained to be proficient at performing bimanual 90° coordination either at a high (2.5 Hz) or low (0.5 Hz) frequency with both kinesthetic and visual information available. At high frequency, participants trained for twice as long to achieve performance comparable to participants training at low frequency. Participants were then paired within (low-low or high-high) or between (low-high) frequency groups to perform a visually coupled dyadic unimanual 90° coordination task, during which they were free to settle at any jointly determined frequency to synchronize their rhythmic movements. The results showed that the coordination skill was frequency-specific. For dyads in which one or both members had learned the 90° bimanual coordination at low frequency, performance settled at a low frequency (≈0.5 Hz) with more successfully synchronized trials. Dyads in which both members had learned the 90° bimanual coordination at high frequency struggled with the task and performed poorly. Their dyadic coordination settled at a higher frequency (≈1.5 Hz) on average, but with twice the variability in settling frequency and significantly fewer synchronized trials. The difference between the dyadic coordination and bimanual tasks was that only visual information was available to couple the movements in the former, while both kinesthetic and visual information were available in the latter. Therefore, the high frequency group must have relied on kinesthetic information to perform both coordination tasks, while the low frequency group was well able to use visual information for both. In the mixed training pairs, the low-frequency-trained member of the pair was likely responsible for the better performance. These conclusions were consistent with results of previous studies.
- Published
- 2021
7. The role of intentionality in the performance of a learned 90° bimanual rhythmic coordination during frequency scaling: data and model
- Author
-
Qin Zhu, Geoffrey P. Bingham, and Rachel A. Herth
- Subjects
Spontaneous transition, General Neuroscience, Movement, Relative direction, Rhythm, Control theory, Perceptual learning, Mode switching, Humans, Learning, Relative phase, Frequency scaling, Psychomotor Performance - Abstract
Two rhythmic coordinations, 0° and 180° relative phase, can be performed stably at a preferred frequency (~1 Hz) without training. Evidence indicates that both 0° and 180° coordination entail detection of the relative direction of movement. At higher frequencies, this yields instability of 180° and spontaneous transition to 0°. The ability to perform a 90° coordination can be acquired by learning to detect and use relative position as information. We now investigate the skilled performance of 90° bimanual coordination with frequency scaling and whether 90° coordination exhibits mode switching to 0° or 180° at higher frequencies. Unlike the switching from 180° to 0°, a transition from the learned 90° coordination to the intrinsic 0° or 180° modes would entail a change in information. This would seem to require intentional decisions during performance, as would correcting performance that had strayed from 90°. Relatedly, correction would seem to be an intrinsic part of the performance of 90° during learning. We investigated whether it remains so. We tested bimanual coordination at 90° under both noninterference and correcting instructions. Under correcting instructions, bimanual 90° coordination remained stable at both low and high frequencies. Noninterference instructions yielded stable performance at lower frequencies and switching to 0° or 180° at higher frequencies. Thus, correction is optional and switching to the intrinsic modes occurred. We extended the Bingham (Ecol Psychol 16:45–53, 2004a; Advances in psychology, vol. 135, Time-to-contact, Elsevier Science Publishers, 2004b) model for 0° and 180° coordination to create a dynamical, perception–action account of learned 90° bimanual coordination, in which mode switching and correction were both initiated as the information required for performance of 90° fell below threshold. This means that intentional decisions about what coordination to perform and whether to correct occurred only before performance was begun, not during performance. The extended strictly dynamical model was successfully used to simulate performance of participants in the experiments.
- Published
- 2021
8. Monocular guidance of reaches-to-grasp using visible support surface texture: data and model
- Author
-
Olivia Cherry, Geoffrey P. Bingham, Xiaoye Michael Wang, and Rachel A. Herth
- Subjects
Vision, Monocular, Vision, Binocular, Humans, Computer vision, Hand Strength, General Neuroscience, Grasp, Biomechanical Phenomena, Binocular disparity, Support surface, Binocular vision, Monocular vision, Psychomotor Performance - Abstract
We investigated monocular information for the continuous online guidance of reaches-to-grasp and present a dynamical control model thereof. We defined an information variable using optical texture projected from a support surface (i.e., a table) over which the participants reached-to-grasp target objects sitting on the table surface at different distances. Using either binocular or monocular vision in the dark, participants rapidly reached-to-grasp a phosphorescent square target object with visibly phosphorescent thumb and index finger. Targets were one of three sizes. The target either sat flat on the support surface or was suspended a few centimeters above the surface at a slant. The latter condition perturbed the visible relation of the target to the support surface. The support surface was either invisible in the dark or covered with a visible phosphorescent checkerboard texture. Reach-to-grasp trajectories were recorded and Maximum Grasp Apertures (MGA), Movement Times (MT), Time of MGA (TMGA), and Time of Peak Velocities (TPV) were analyzed. These measures were selected as most indicative of the participant's certainty about the relation of hand to target object during the reaches. The findings were that, in general, monocular reaches were less certain (slower, with earlier TMGA and TPV) than binocular reaches, except with the target flat on the visible support surface, where performance with monocular and binocular vision was equivalent. The hypothesized information was the difference in image width of optical texture (equivalent to density of optical texture) at the hand versus the target. A control dynamic equation was formulated representing proportional rate control of the reaches-to-grasp (akin to the model using binocular disparity formulated by Anderson and Bingham (Exp Brain Res 205: 291–306, 2010)). Simulations were performed and presented using this model. Simulated performance was compared to actual performance and found to replicate it. To our knowledge, this is the first study of monocular information used for continuous online guidance of reaches-to-grasp, complete with a control dynamic model.
- Published
- 2020
9. Information for perceiving blurry events: Optic flow and color are additive
- Author
-
Hongge Xu, Geoffrey P. Bingham, Xiaoye Michael Wang, and Jing Samantha Pan
- Subjects
Linguistics and Language, Eye Movements, Motion Perception, Color, Experimental and Cognitive Psychology, Optic Flow, Grayscale, Language and Linguistics, Static image, Humans, Event perception, Computer vision, Eye movement, Sensory Systems, Visual Perception, RGB color model - Abstract
Information used in visual event perception includes both static image structure projected from opaque object surfaces and dynamic optic flow generated by motion. Events presented in static blurry grayscale displays have been shown to be recognized only when, and after, being presented with optic flow. In this study, we investigated the effects of optic flow and color on identifying blurry events by studying identification accuracy and eye-movement patterns. Three types of color displays were tested: grayscale, original colors, or rearranged colors (where the RGB values of the original colors were adjusted). In each color condition, participants identified 12 blurry events in five experimental phases. In the first two phases, static blurry images were presented alone or sequentially with a motion mask between consecutive frames, and identification was poor. In Phase 3, where optic flow was added, identification was comparably good. In Phases 4 and 5, motion was removed, but identification remained good. Thus, optic flow improved event identification during and after its presentation. Color also improved performance: participants were consistently better at identifying original-color displays than grayscale or rearranged-color displays. Importantly, the effects of optic flow and color were additive. Finally, in both motion and postmotion phases, a significant portion of eye fixations fell in strong optic flow areas, suggesting that participants continued to look where flow was available even after it stopped. We infer that optic flow specified depth structure in the blurry image structure and yielded an improvement in identification from static blurry images.
- Published
- 2020
11. Large continuous perspective change with noncoplanar points enables accurate slant perception
- Author
-
Xiaoye Michael Wang, Mats Lind, and Geoffrey P. Bingham
- Subjects
Adult, Male, Motion Perception, Optical flow, Experimental and Cognitive Psychology, Euclidean structure, Young Adult, Behavioral Neuroscience, Vision, Monocular, Perception, Humans, Bootstrap model, Depth Perception, Monocular, Perspective (graphical), Space Perception, Visual Perception, Female - Abstract
Perceived slant has often been characterized as a component of 3D shape perception for polyhedral objects. Like 3D shape, slant is often perceived inaccurately. Lind, Lee, Mazanowski, Kountouriotis, and Bingham (2014) found that 3D shape was perceived accurately with perspective changes ≥ 45°. We now similarly tested perception of 3D slant. To account for their results, Lind et al. (2014) developed a bootstrap model based on the assumption that optical information yields perception of 3D relief structure, which is then used with large perspective changes to bootstrap perception of 3D Euclidean structure. However, slant perception usually entails planar surfaces, and structure-from-motion fails in the absence of noncoplanar points. Nevertheless, the displays in Lind et al. (2014) included stereomotion in addition to monocular optical flow. Because stereomotion is higher order, the bootstrap model might apply in the case of strictly planar surfaces. We investigated whether stereomotion, monocular structure-from-motion (SFM), or the combination of the two would yield accurate 3D slant perception with large continuous perspective change. In Experiment 1, we found that judgments of slant were inaccurate in all information conditions. In Experiment 2, we added noncoplanar structure to the surfaces. We found that judgments in the monocular SFM and combined conditions now became accurate once perspective changes were ≥ 45°, replicating the results of Lind et al. (2014) and supporting the bootstrap model. In short, we found that noncoplanar structure was required to enable accurate perception of 3D slant with sufficiently large perspective changes.
- Published
- 2018
12. Searching for invariance: Geographical and optical slant
- Author
-
Olivia Cherry and Geoffrey P. Bingham
- Subjects
Adult, Male, Target surface, Young Adult, Form Perception, Orientation, Perception, Humans, 3D perception, Analysis of Variance, Vision, Binocular, Perspective (graphical), Sensory Systems, Ophthalmology, Tilt (optics), Female, Binocular vision - Abstract
When we move through rigid environments, surface orientations of static objects do not appear to change. Most studies have investigated the perception of optical slant which is dependent on the perspective of the observer. We investigated the perception of geographical slant, which is invariant across different viewing perspectives, and compared it to optical slant. In Experiment 1, participants viewed a 3D triangular target surface with triangular phosphorescent texture elements presented at eye level at one of 5 slants from 0° to 90°, at 0° or 40° tilt. Participants turned around to adjust a 2D line or a 3D surface to match the slant of the target surface. In Experiment 2, the difference between optical and geographical slant was increased by changing the height of the surface to be judged. In Experiment 3, target surfaces were rotated by 50° (±25°) and viewed in both a dark and lighted room. In Experiment 1, the overall pattern of judgments exhibited only slight differences between response measures. In Experiment 2, slant judgments were slightly overestimated when the surface was at a low height and at 0° tilt. We compared optical slants of the surfaces to geographical slants. While sometimes inaccurate, participants' slant judgments remained invariant across changes in viewing perspective. In Experiment 3, judgments were the same in the dark and lighted conditions. There was no effect of target motion on judgments, although variability decreased. We conclude that participants' judgments were predicted by geographical slant, not optical slant.
- Published
- 2018
13. When kinesthetic information is neglected in learning a Novel bimanual rhythmic coordination
- Author
-
Winona Snapp-Childs, Geoffrey P. Bingham, Todd Mirich, Qin Zhu, and Shaochen Huang
- Subjects
Adult, Male, Linguistics and Language, Visual perception, Movement, Experimental and Cognitive Psychology, Sensory system, Functional Laterality, Language and Linguistics, Stimulus modality, Perception, Humans, Learning, Kinesthesis, Amodal perception, Kinesthetic learning, Middle Aged, Sensory Systems, Visual Perception, Female, Psychology, Photic Stimulation, Psychomotor Performance, Cognitive psychology - Abstract
Many studies have shown that rhythmic interlimb coordination involves perception of the coupled limb movements, and different sensory modalities can be used. Using visual displays to inform the coupled bimanual movement, novel bimanual coordination patterns can be learned with practice. A recent study showed that similar learning occurred without vision when a coach provided manual guidance during practice. The information provided via the two different modalities may be same (amodal) or different (modality specific). If it is different, then learning with both is a dual task, and one source of information might be used in preference to the other in performing the task when both are available. In the current study, participants learned a novel 90° bimanual coordination pattern without or with visual information in addition to kinesthesis. In posttest, all participants were tested without and with visual information in addition to kinesthesis. When tested with visual information, all participants exhibited performance that was significantly improved by practice. When tested without visual information, participants who practiced using only kinesthetic information showed improvement, but those who practiced with visual information in addition showed remarkably less improvement. The results indicate that (1) the information is not amodal, (2) use of a single type of information was preferred, and (3) the preferred information was visual. We also hypothesized that older participants might be more likely to acquire dual task performance given their greater experience of the two sensory modes in combination, but results were replicated with both 20- and 50-year-olds.
- Published
- 2017
14. Bootstrapping a better slant: A stratified process for recovering 3D metric slant
- Author
-
Xiaoye Michael Wang, Mats Lind, and Geoffrey P. Bingham
- Subjects
Linguistics and Language, Depth Perception, Perspective (graphical), Experimental and Cognitive Psychology, Sensory Systems, Language and Linguistics, Affine geometry, Bootstrapping, Metric (mathematics), Structure from motion, Humans, Equidistant, Symmetry (geometry), Algorithm - Abstract
Lind et al. (Journal of Experimental Psychology: Human Perception and Performance, 40 (1), 83, 2014) proposed a bootstrap process that used right angles on 3D relief structure, viewed over sufficiently large continuous perspective change, to recover the scaling factor for metric shape. Wang, Lind, and Bingham (Journal of Experimental Psychology: Human Perception and Performance, 44(10), 1508-1522, 2018) replicated these results in the case of 3D slant perception. However, subsequent work by the same authors (Wang et al., 2019) suggested that the original solution could be ineffective for 3D slant and presented an alternative that used two equidistant points (a portion of the original right angle). We now describe a three-step stratified process to recover 3D slant using this new solution. Starting with 2D inputs, we (1) used an existing structure-from-motion (SFM) algorithm to derive the object’s 3D relief structure and (2) applied the bootstrap process to it to recover the unknown scaling factor, which (3) was then used to produce a slant estimate. We presented simulations of results from four previous experiments (Wang et al., 2018, 2019) to compare model and human performance. We showed that the stratified process has great predictive power, reproducing a surprising number of phenomena found in human experiments. The modeling results also confirmed arguments made in Wang et al. (2019) that an axis of mirror symmetry in an object allows observers to use the recovered scaling factor to produce an accurate slant estimate. Thus, poor estimates in the context of a lack of symmetry do not mean that the scaling factor has not been recovered, but merely that the direction of slant was ambiguous.
- Published
- 2019
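Step (2) of the stratified process, recovering the unknown depth scaling, can be sketched from the two-equidistant-points idea: if two intervals on the object are known (e.g., from symmetry) to be equal in Euclidean length, the relief scaling factor is solvable in closed form, after which metric slant follows. The geometry and numbers below are an illustrative toy, not the authors' implementation, which tracks relief structure over continuous perspective change.

```python
import math

def recover_scale_and_slant(rel_a, rel_b):
    """rel_a, rel_b: interval vectors (dx, dy, w) in relief coordinates,
    where w = c * true_depth for an unknown scaling c, and the two
    intervals are assumed equal in true Euclidean length (symmetry).
    Returns (c, metric slant of interval b in degrees)."""
    (dxa, dya, wa), (dxb, dyb, wb) = rel_a, rel_b
    # equate the two squared Euclidean lengths and solve for c
    c = math.sqrt((wa**2 - wb**2) / (dxb**2 + dyb**2 - dxa**2 - dya**2))
    # rescale depth by 1/c, then read off the metric slant of interval b
    slant = math.degrees(math.atan2(wb / c, math.hypot(dxb, dyb)))
    return c, slant

theta = math.radians(40.0)                 # true surface slant
c_true = 0.5                               # unknown relief compression
a = (1.0, 0.0, 0.0)                        # unit interval in the image plane
b = (0.0, math.cos(theta), c_true * math.sin(theta))  # unit in-plane interval
c_hat, slant_hat = recover_scale_and_slant(a, b)
```

The solver recovers both the relief compression and the 40° slant exactly in this noise-free toy, which is the sense in which equal-length intervals disambiguate the depth scaling.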
15. Symmetry mediates the bootstrapping of 3-D relief slant to metric slant
- Author
-
Xiaoye Michael Wang, Geoffrey P. Bingham, and Mats Lind
- Subjects
Linguistics and Language, Depth Perception, Rotation, Right angle, Experimental and Cognitive Psychology, Sensory Systems, Language and Linguistics, Affine geometry, Bootstrapping, Perception, Structure from motion, Humans, Mirror symmetry, Algorithm, Scaling - Abstract
Empirical studies have always shown 3-D slant and shape perception to be inaccurate as a result of relief scaling (an unknown scaling along the depth direction). Wang, Lind, and Bingham (Journal of Experimental Psychology: Human Perception and Performance, 44(10), 1508–1522, 2018) discovered that sufficient relative motion between the observer and 3-D objects in the form of continuous perspective change (≥45°) could enable accurate 3-D slant perception. They attributed this to a bootstrap process (Lind, Lee, Mazanowski, Kountouriotis, & Bingham in Journal of Experimental Psychology: Human Perception and Performance, 40(1), 83, 2014) where the perceiver identifies right angles formed by texture elements and tracks them in the 3-D relief structure through rotation to extrapolate the unknown scaling factor, then used to convert 3-D relief structure to 3-D Euclidean structure. This study examined the nature of the bootstrap process in slant perception. In a series of four experiments, we demonstrated that (1) features of 3-D relief structure, instead of 2-D texture elements, were tracked (Experiment 1); (2) identifying right angles was not necessary, and a different implementation of the bootstrap process is more suitable for 3-D slant perception (Experiment 2); and (3) mirror symmetry is necessary to produce accurate slant estimation using the bootstrapped scaling factor (Experiments 3 and 4). Together, the results support the hypothesis that a symmetry axis is used to determine the direction of slant and that 3-D relief structure is tracked over sufficiently large perspective change to produce metric depth. Altogether, the results supported the bootstrap process.
- Published
- 2019
16. A stratified process for the perception of objects: From optical transformations to 3D relief structure to 3D similarity structure to slant or aspect ratio
- Author
-
Xiaoye Michael Wang, Geoffrey P. Bingham, and Mats Lind
- Subjects
Adult ,Male ,Similarity (geometry) ,Computer science ,Machine vision ,media_common.quotation_subject ,Motion Perception ,050105 experimental psychology ,03 medical and health sciences ,Judgment ,0302 clinical medicine ,Imaging, Three-Dimensional ,Perception ,Structure from motion ,Humans ,0501 psychology and cognitive sciences ,Computer vision ,media_common ,business.industry ,05 social sciences ,Perspective (graphical) ,Sensory Systems ,Form Perception ,Ophthalmology ,Research Design ,Sensory Thresholds ,Metric (mathematics) ,Female ,Artificial intelligence ,Symmetry (geometry) ,Visual angle ,business ,030217 neurology & neurosurgery ,Photic Stimulation - Abstract
Previously, we developed a stratified process for slant perception. First, optical transformations in structure-from-motion (SFM) and stereo were used to derive 3D relief structure (where depth scaling remains arbitrary). Second, with sufficient continuous perspective change (≥45°), a bootstrap process derived 3D similarity structure. Third, the perceived slant was derived. As predicted by theoretical work on SFM, small visual angle (5°) viewing requires non-coplanar points. Slanted surfaces with small 3D cuboids or tetrahedrons yielded accurate judgment while planar surfaces did not. Normally, object perception entails non-coplanar points. Now, we apply the stratified process to object perception where, after deriving similarity structure, alternative metric properties of the object can be derived (e.g. slant of the top surface or width-to-depth aspect ratio). First, we tested slant judgments of the smooth planar tops of three different polyhedral objects. We tested rectangular, hexagonal, and asymmetric pentagonal surfaces, finding that symmetry was required to determine the direction of slant (APP, 2019, https://doi.org/10.3758/s13414-019-01859-5). Our current results replicated the previous findings. Second, we tested judgments of aspect ratios, finding accurate performance only for symmetric objects. Results from this study suggest that, first, trackable non-coplanar points can be attained in the form of 3D objects. Second, symmetry is necessary to constrain slant and aspect ratio perception. Finally, deriving 3D similarity structure precedes estimating object properties, such as slant or aspect ratio. Together, evidence presented here supports the stratified bootstrap process for 3D object perception. STATEMENT OF SIGNIFICANCE: Planning interactions with objects in the surrounding environment entails the perception of 3D shape and slant. 
Studying ways through which 3D metric shape and slant can be perceived accurately by moving observers not only sheds light on how the visual system works, but also provides understanding that can be applied to other fields, like machine vision or remote sensing. The current study is a logical extension of previous studies by the same authors and explores the roles of large continuous perspective changes, relief structure, and symmetry in a stratified process for object perception.
- Published
- 2019
17. Monocular Perception of Egocentric Distance Via Head Movement Towards a Target: Verbal versus Action Measures
- Author
-
Geoffrey P. Bingham and Christopher C. Pagano
- Subjects
medicine.medical_specialty ,Physical medicine and rehabilitation ,Monocular ,Action (philosophy) ,Movement (music) ,Head (linguistics) ,Perception ,media_common.quotation_subject ,medicine ,Psychology ,media_common - Published
- 2019
18. Perception of Spatial Scale in Events from Information in Motion
- Author
-
Geoffrey P. Bingham and Daniel S. McConnell
- Subjects
Computer science ,business.industry ,Perception ,media_common.quotation_subject ,Spatial ecology ,Computer vision ,Artificial intelligence ,business ,Motion (physics) ,media_common - Published
- 2019
19. Affordances and the Ecological Approach to Throwing for Long Distances and Accuracy
- Author
-
Andrew D. Wilson, Qin Zhu, and Geoffrey P. Bingham
- Published
- 2019
20. Training compliance control yields improved drawing in 5–11year old children with motor difficulties
- Author
-
Geoffrey P. Bingham, Mark Mon-Williams, Winona Snapp-Childs, Katy A. Shire, and Liam J. B. Hill
- Subjects
Male ,Aging ,030506 rehabilitation ,medicine.medical_specialty ,education ,Biophysics ,Psychological intervention ,Experimental and Cognitive Psychology ,computer.software_genre ,Training (civil) ,Article ,Task (project management) ,03 medical and health sciences ,0302 clinical medicine ,Physical medicine and rehabilitation ,Intervention (counseling) ,medicine ,Humans ,Learning ,Orthopedics and Sports Medicine ,Child ,Motor skill ,Cross-Over Studies ,Physical Education and Training ,Multimedia ,General Medicine ,Test (assessment) ,Motor Skills Disorders ,Treatment Outcome ,Motor Skills ,Child, Preschool ,Computers, Handheld ,Female ,0305 other medical science ,Motor learning ,Transfer of learning ,Psychology ,computer ,Psychomotor Performance ,030217 neurology & neurosurgery - Abstract
Many children have motor difficulties, including some who cannot produce movements qualitatively well enough to improve through perceptuo-motor learning without intervention. We have developed a training method that supports active movement generation to allow improvement in a 3D tracing task requiring good compliance control. Previously, we tested a limited age range of children and found that training improved performance on the 3D tracing task and that the training transferred to a 2D drawing test. In the present study, school children (5–11 years old) with motor difficulties were trained in the 3D tracing task and transfer to a 2D drawing task was tested. We used a cross-over design in which half of the children received training on the 3D tracing task during the first training period and the other half received training during the second training period. Given previous results, we predicted that younger children would initially show reduced performance relative to older children, and that performance at all ages would improve with training. We also predicted that training would transfer to the 2D drawing task. However, the pre-training performance of younger and older children was equally poor. Nevertheless, post-training performance on the 3D task was dramatically improved for both age groups and the training transferred to the 2D drawing task. Overall, this work contributes to a growing body of literature demonstrating relatively preserved motor learning in children with motor difficulties and further demonstrates the importance of games in therapeutic interventions.
- Published
- 2016
21. Using task dynamics to quantify the affordances of throwing for long distance and accuracy
- Author
-
Geoffrey P. Bingham, Andrew Weightman, Andrew D. Wilson, and Qin Zhu
- Subjects
Adult ,Male ,Computer science ,Projectile motion ,Experimental and Cognitive Psychology ,Motor Activity ,050105 experimental psychology ,Young Adult ,03 medical and health sciences ,Behavioral Neuroscience ,0302 clinical medicine ,Arts and Humanities (miscellaneous) ,Humans ,0501 psychology and cognitive sciences ,Computer vision ,Motion perception ,Affordance ,Communication ,business.industry ,05 social sciences ,Athletes ,Motor Skills ,Space Perception ,Ball (bearing) ,Female ,Tennis ball ,Artificial intelligence ,business ,Row ,030217 neurology & neurosurgery ,Throwing ,Subspace topology - Abstract
In 2 experiments, the current study explored how affordances structure throwing for long distance and accuracy. In Experiment 1, 10 expert throwers (from baseball, softball, and cricket) threw regulation tennis balls to hit a vertically oriented 4 ft × 4 ft target placed at each of 9 locations (3 distances × 3 heights). We measured their release parameters (angle, speed, and height) and showed that they scaled their throws in response to changes in the target's location. We then simulated the projectile motion of the ball and identified a continuous subspace of release parameters that produce hits to each target location. Each subspace describes the affordance of our target to be hit by a tennis ball moving in projectile motion to the relevant location. The simulated affordance spaces showed how the release parameter combinations required for hits changed with the target location. The experts tracked these changes in their performance and successfully hit the targets. We next tested unusual (horizontal) targets that generated correspondingly different affordance subspaces to determine whether the experts would track the affordance to generate successful hits. Do the experts perceive the affordance? They do. In Experiment 2, 5 cricketers threw to hit either vertically or horizontally oriented targets and successfully hit both, exhibiting release parameters located within the requisite affordance subspaces. We advocate a task-dynamical approach, treating affordances as properties of objects and events in the context of tasks, as the future of research in this area.
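The affordance-subspace idea can be illustrated with a drag-free projectile model: scan release speed and angle, and keep the combinations whose trajectory passes through the target. This is a simplified sketch (a realistic tennis-ball simulation would include air drag); all numbers are illustrative, not the study's parameters.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def height_at_distance(speed, angle_deg, release_height, target_dist):
    """Ball height (m) when it reaches target_dist, ignoring air drag."""
    a = math.radians(angle_deg)
    vx = speed * math.cos(a)
    if vx <= 0:
        return None  # ball never travels forward
    t = target_dist / vx
    return release_height + speed * math.sin(a) * t - 0.5 * G * t * t

def hits_target(speed, angle_deg, release_height,
                target_dist, target_center, target_half=0.61):
    """True if the trajectory crosses a vertical 4 ft x 4 ft target
    (half-extent ~0.61 m) centred target_center metres above the ground."""
    y = height_at_distance(speed, angle_deg, release_height, target_dist)
    return y is not None and abs(y - target_center) <= target_half

# The 'affordance subspace': all (speed, angle) pairs that hit a target
# 9 m away, centred 1.5 m high, released from 1.8 m (illustrative values).
hit_region = [(v, ang)
              for v in [n / 2 for n in range(20, 61)]   # 10.0 .. 30.0 m/s
              for ang in range(-10, 46)                 # -10 .. 45 degrees
              if hits_target(v, ang, 1.8, 9.0, 1.5)]
```

Moving or reorienting the target reshapes `hit_region`, which is the sense in which the affordance spaces in the study changed with target location and orientation.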
- Published
- 2016
22. Perception of the Similarity Structure of Objects: A Stratified Model
- Author
-
Mats Lind, Geoffrey P. Bingham, and Xiaoye Wang
- Subjects
Ophthalmology ,Similarity (network science) ,business.industry ,Perception ,media_common.quotation_subject ,Structure (category theory) ,Pattern recognition ,Artificial intelligence ,business ,Sensory Systems ,Mathematics ,media_common - Published
- 2020
23. Exploring disturbance as a force for good in motor learning
- Author
-
Jack Brookes, Earle Jamieson, Faisal Mushtaq, Aaron J. Fath, Peter Culmer, Richard M. Wilkie, Mark Mon-Williams, and Geoffrey P. Bingham
- Subjects
Male ,Computer science ,Entropy ,Social Sciences ,Inference ,Stiffness ,Training (Education) ,Learning and Memory ,0302 clinical medicine ,Sociology ,Adaptive Training ,Psychology ,Free Energy ,media_common ,Haptic technology ,0303 health sciences ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,Physics ,Robotics ,Middle Aged ,Adaptation, Physiological ,Surprise ,Physical Sciences ,Thermodynamics ,Engineering and Technology ,Medicine ,Female ,Motor learning ,Robots ,Algorithms ,Research Article ,Adult ,Disturbance (geology) ,Movement ,Science ,media_common.quotation_subject ,Materials Science ,Material Properties ,Workspace ,Research and Analysis Methods ,Education ,Human Learning ,Young Adult ,03 medical and health sciences ,Control theory ,Mechanical Properties ,Learning ,Humans ,030304 developmental biology ,Free energy principle ,Force field (physics) ,Mechanical Engineering ,Counterintuitive ,Cognitive Psychology ,Biology and Life Sciences ,Cognitive Science ,Robot ,Mathematics ,Psychomotor Performance ,030217 neurology & neurosurgery ,Neuroscience - Abstract
Disturbance forces facilitate motor learning, but theoretical explanations for this counterintuitive phenomenon are lacking. Smooth arm movements require predictions (inference) about the force-field associated with a workspace. The Free Energy Principle (FEP) suggests that such ‘active inference’ is driven by ‘surprise’. We used these insights to create a formal model that explains why disturbance helps learning. In two experiments, participants undertook a continuous tracking task where they learned how to move their arm in different directions through a novel 3D force field. We compared baseline performance before and after exposure to the novel field to quantify learning. In Experiment 1, the exposure phases (but not the baseline measures) were delivered under three different conditions: (i) robot haptic assistance; (ii) no guidance; (iii) robot haptic disturbance. The disturbance group showed the best learning as our model predicted. Experiment 2 further tested our FEP inspired model. Assistive and/or disturbance forces were applied as a function of performance (low surprise), and compared to a random error manipulation (high surprise). The random group showed the most improvement as predicted by the model. Thus, motor learning can be conceptualised as a process of entropy reduction. Short term motor strategies (e.g. global impedance) can mitigate unexpected perturbations, but continuous movements require active inference about external force-fields in order to create accurate internal models of the external world (motor learning). Our findings reconcile research on the relationship between noise, variability, and motor learning, and show that information is the currency of motor learning.
- Published
- 2020
24. Change in effectivity yields recalibration of affordance geometry to preserve functional dynamics
- Author
-
Geoffrey P. Bingham and Xiaoye Michael Wang
- Subjects
Adult ,Male ,Current (mathematics) ,General Neuroscience ,05 social sciences ,Mathematical analysis ,Function (mathematics) ,Invariant (physics) ,Motor Activity ,Stability (probability) ,050105 experimental psychology ,Biomechanical Phenomena ,03 medical and health sciences ,0302 clinical medicine ,Body Size ,Humans ,0501 psychology and cognitive sciences ,Percept ,Constant (mathematics) ,Affordance ,Scaling ,030217 neurology & neurosurgery ,Psychomotor Performance ,Size Perception ,Mathematics - Abstract
Mon-Williams and Bingham (Exp Brain Res 211(1):145–160, 2011) developed a geometrical affordance model for reaches-to-grasp and identified a constant scaling relationship, P, between safety margins (SM) and available apertures (AA), which are determined by the sizes of the objects and the individual hands. Bingham et al. (J Exp Psychol Hum Percept Perform 40(4):1542–1550, 2014) extended the model by introducing a dynamical component that scales the geometrical relationship to the stability of the reach-to-grasp movement. The goal of the current study was to explore whether and how quickly a change in the relevant effectivity (functionally determined hand size = maximum grip) would affect the geometrical and dynamical scaling relationships. The maximum grip of large-handed males was progressively restricted. Participants responded to this restriction by using progressively smaller safety margins, but progressively larger P (= SM/AA) values that preserved an invariant dynamical scaling relationship. The recalibration was relatively fast, occurring over five trials or fewer, presumably the number required to detect the variability or stability of performance. The results supported the affordance model for reaches-to-grasp in which the invariance is determined by the dynamical component, because it serves the goal of not colliding with the object before a successful grasp can be achieved. The findings were also consistent with those of Snapp-Childs and Bingham (Exp Brain Res 198(4):527–533, 2009), who found changes in age-specific geometric scaling for stepping affordances as a function of changes in effectivities over the life span, where those changes preserved a dynamic scaling constant similar to that in the current study.
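The geometric part of the model reduces to simple arithmetic. Under one simplified reading (the variable roles below are our assumptions for illustration, not the authors' exact operationalisation), the available aperture is maximum grip minus object size, the safety margin is hand opening minus object size, and P is their ratio:

```python
def affordance_p(max_grip, object_size, hand_aperture):
    """P = SM / AA from the geometric affordance model.

    AA (available aperture) = max_grip - object_size
    SM (safety margin)      = hand_aperture - object_size
    These variable roles are a simplified reading for illustration.
    """
    aa = max_grip - object_size
    if aa <= 0:
        raise ValueError("object is too large for this hand")
    sm = hand_aperture - object_size
    return sm / aa

# Restricting maximum grip while the hand opening is held fixed shrinks AA
# and so raises P, the direction of change the restricted-grip participants
# showed (numbers in cm, purely illustrative):
p_values = [affordance_p(mg, 7.0, 8.5) for mg in (12.0, 10.0, 9.0)]
```

Here `p_values` increases as the grip restriction tightens, while in the experiment participants also reduced their safety margins, trading the two off to keep the dynamical relationship invariant.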
- Published
- 2018
25. 'Center of Mass Perception': Affordances as Dispositions Determined by Dynamics
- Author
-
Michael M. Muchisky and Geoffrey P. Bingham
- Subjects
Dynamics (music) ,Perception ,media_common.quotation_subject ,Affordance ,Psychology ,Center of mass (relativistic) ,media_common ,Cognitive psychology - Published
- 2018
26. Perception of time to contact of slow- and fast-moving objects using monocular and binocular motion information
- Author
-
Geoffrey P. Bingham, Mats Lind, and Aaron J. Fath
- Subjects
Adult ,Male ,Linguistics and Language ,Computer science ,media_common.quotation_subject ,Motion Perception ,Experimental and Cognitive Psychology ,050105 experimental psychology ,Language and Linguistics ,Motion (physics) ,03 medical and health sciences ,0302 clinical medicine ,Vision, Monocular ,Perception ,Contrast (vision) ,Humans ,0501 psychology and cognitive sciences ,Computer vision ,Motion perception ,media_common ,Vision, Binocular ,Monocular ,business.industry ,05 social sciences ,Time perception ,Sensory Systems ,Time Perception ,Binocular disparity ,Female ,Artificial intelligence ,business ,Binocular vision ,030217 neurology & neurosurgery - Abstract
The role of the monocular-flow-based optical variable τ in the perception of the time to contact of approaching objects has been well studied. There are additional contributions from binocular sources of information, such as changes in disparity over time (CDOT), but these are less well understood. We conducted an experiment to determine whether an object's velocity affects which source is most effective for perceiving time to contact. We presented participants with stimuli that simulated two approaching squares. During the approach the squares disappeared, and participants indicated which square would have contacted them first. The approach was specified by (a) only disparity-based information, (b) only monocular flow, or (c) all sources of information in normal viewing conditions. As expected, participants were more accurate at judging fast objects when only monocular flow was available than when only CDOT was. In contrast, participants were more accurate at judging slow objects with only CDOT than with only monocular flow. For both ranges of velocity, the condition with both information sources yielded performance equivalent to the better of the single-source conditions. These results show that different sources of motion information are used to perceive time to contact and play different roles in allowing stable perception across a variety of conditions.
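Under a small-angle approximation, both sources specify the same time to contact: monocular τ is optical size over expansion rate, while the binocular estimate divides disparity by its rate of change (CDOT). A minimal sketch, with illustrative values of our own choosing:

```python
def ttc_from_tau(theta, theta_dot):
    """Monocular estimate: tau = optical size / expansion rate."""
    return theta / theta_dot

def ttc_from_cdot(disparity, disparity_dot):
    """Binocular estimate from change of disparity over time (CDOT)."""
    return disparity / disparity_dot

def small_angle_optics(size, ipd, distance, speed):
    """Optical quantities for an object approaching at constant speed."""
    theta = size / distance                   # subtended angle (rad)
    theta_dot = size * speed / distance**2    # expansion rate (rad/s)
    disparity = ipd / distance                # disparity re: infinity (rad)
    disparity_dot = ipd * speed / distance**2
    return theta, theta_dot, disparity, disparity_dot

# Object 6 cm wide, 10 m away, closing at 2 m/s; interpupillary 6.5 cm.
th, thd, d, dd = small_angle_optics(0.06, 0.065, 10.0, 2.0)
# Both routes recover the true time to contact, 10 / 2 = 5 s.
```

That the two estimates agree in the ideal case is what makes the velocity-dependent accuracy differences in the experiment informative: the sources differ in usability, not in the quantity they specify.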
- Published
- 2018
27. Information about relative phase in bimanual coordination is modality specific (not amodal), but kinesthesis and vision can teach one another
- Author
-
Geoffrey P. Bingham, Qin Zhu, and Winona Snapp-Childs
- Subjects
Adult ,Male ,Adolescent ,Transfer, Psychology ,Biophysics ,Experimental and Cognitive Psychology ,050105 experimental psychology ,Functional Laterality ,Task (project management) ,03 medical and health sciences ,Young Adult ,0302 clinical medicine ,Stimulus modality ,Humans ,Learning ,0501 psychology and cognitive sciences ,Orthopedics and Sports Medicine ,Kinesthesis ,Vision, Ocular ,Modality (human–computer interaction) ,Proprioception ,05 social sciences ,Amodal perception ,General Medicine ,Action (philosophy) ,Female ,Relative phase ,Psychology ,030217 neurology & neurosurgery ,Psychomotor Performance ,Cognitive psychology - Abstract
How is information from different sensory modalities coordinated when learning an action? We tested two hypotheses. The first is that the information is amodal. The second is that the information is modality specific and that one modality is used first to learn the action and then to teach the other modality. We investigated these hypotheses using a rhythmic coordination task. One group of participants learned to perform bimanual coordination at a relative phase of 90° using kinesthesis. A second group used vision to learn unimanual 90° coordination. After training, performance using the alternate modality was tested in each case. Snapp-Childs, Wilson, and Bingham (2015) had found transfer of 50% of learned performance of 90° coordination between the unimanual and bimanual tasks when each had included use of vision. Now, we found essentially no transfer (≈5%), indicating that the information was modality specific. Next, post-training trials performed using the untrained modality were alternated with trials in which kinesthesis and vision were used. The result was that performance using the untrained modality progressively improved. We concluded that the trained modality was used to teach the untrained modality and that this likely represents the way information from different sensory modalities is coordinated in the performance of actions.
- Published
- 2018
28. Binocular Perception of 2D Lateral Motion and Guidance of Coordinated Motor Behavior
- Author
-
Winona Snapp-Childs, Aaron J. Fath, Geoffrey P. Bingham, and Georgios K. Kountouriotis
- Subjects
Adult ,Male ,media_common.quotation_subject ,Lag ,Motion Perception ,Experimental and Cognitive Psychology ,Motor Activity ,Stimulus (physiology) ,050105 experimental psychology ,03 medical and health sciences ,0302 clinical medicine ,Artificial Intelligence ,Joystick ,Perception ,Humans ,0501 psychology and cognitive sciences ,Computer vision ,Motion perception ,media_common ,Vision, Binocular ,Communication ,Monocular ,business.industry ,05 social sciences ,Sensory Systems ,Ophthalmology ,Virtual image ,Female ,Artificial intelligence ,business ,Psychology ,Binocular vision ,Psychomotor Performance ,030217 neurology & neurosurgery - Abstract
Zannoli, Cass, Alais, and Mamassian (2012) found greater audiovisual lag between a tone and disparity-defined stimuli moving laterally (90–170 ms) than for disparity-defined stimuli moving in depth or luminance-defined stimuli moving laterally or in depth (50–60 ms). We tested if this increased lag presents an impediment to visually guided coordination with laterally moving objects. Participants used a joystick to move a virtual object in several constant relative phases with a laterally oscillating stimulus. Both the participant-controlled object and the target object were presented using a disparity-defined display that yielded information through changes in disparity over time (CDOT) or using a luminance-defined display that additionally provided information through monocular motion and interocular velocity differences (IOVD). Performance was comparable for both disparity-defined and luminance-defined displays in all relative phases. This suggests that, despite lag, perception of lateral motion through CDOT is generally sufficient to guide coordinated motor behavior.
- Published
- 2015
29. Training children aged 5-10 years in manual compliance control to improve drawing and handwriting
- Author
-
Winona Snapp-Childs and Geoffrey P. Bingham
- Subjects
Male ,medicine.medical_specialty ,Handwriting ,Movement ,Transfer, Psychology ,education ,Control (management) ,Biophysics ,Experimental and Cognitive Psychology ,Compliance (psychology) ,Task (project management) ,03 medical and health sciences ,Magnetics ,0302 clinical medicine ,Physical medicine and rehabilitation ,Child Development ,medicine ,Humans ,Orthopedics and Sports Medicine ,Prospective Studies ,Child ,Training (meteorology) ,030229 sport sciences ,General Medicine ,Motor Skills Disorders ,Improved performance ,Transfer of training ,Motor Skills ,Child, Preschool ,Female ,Stylus ,Psychology ,030217 neurology & neurosurgery ,Psychomotor Performance - Abstract
A large proportion of school-aged children exhibit poor drawing and handwriting. This prevalence limits the availability of therapy. We developed an automated method for training improved manual compliance control and, relatedly, prospective control of a stylus. The approach used a difficult training task while providing parametrically modifiable support that enabled the children to perform successfully while developing good compliance control. The task was to use a stylus to push a bead along a 3D wire path. Support was provided by making the wire magnetically attractive to the stylus. Support was progressively reduced as 3D tracing performance improved. We report studies that (1) compared performance of Typically Developing (TD) children and children with Developmental Coordination Disorder (DCD), (2) tested training with active versus passive movement, (3) tested progressively reduced versus constant or no support during training, (4) tested children of different ages, (5) tested the transfer of training to a drawing task, (6) tested the specificity of training with respect to the size, shape, and dimensionality of figures, and (7) investigated the relevance of the training task to the Beery VMI, an inventory used to diagnose DCD. The findings were as follows. (1) Pre-training performance of TD and DCD children was the same and good with high support, but distinct and poor with low support. Support yielded good self-efficacy that motivated training. Post-training performance with no support was improved and the same for TD and DCD children. (2) Actively controlled movements were required for improved performance. (3) Progressively reduced support was required for good performance during and after training. (4) Age differences in performance during pre-training were eliminated post-training. (5) Improvements transferred to drawing. (6) There was no evidence of specificity of training in transfer.
(7) Disparate Beery scores were reflected in pre-training but not post-training performance. We conclude that the method improves manual compliance control, and more generally, prospective control of movements used in drawing performance.
- Published
- 2017
30. Breaking camouflage and detecting targets require optic flow and image structure information
- Author
-
Ned Bingham, Chang Chen, Jing Samantha Pan, and Geoffrey P. Bingham
- Subjects
Visual perception ,Computer science ,Machine vision ,business.industry ,05 social sciences ,Optical flow ,Eye movement ,Target acquisition ,050105 experimental psychology ,Atomic and Molecular Physics, and Optics ,03 medical and health sciences ,0302 clinical medicine ,Optics ,Camouflage ,0501 psychology and cognitive sciences ,Electrical and Electronic Engineering ,business ,Engineering (miscellaneous) ,Image resolution ,030217 neurology & neurosurgery ,Structured light - Abstract
Use of motion to break camouflage extends back to the Cambrian [In the Blink of an Eye: How Vision Sparked the Big Bang of Evolution (New York: Basic Books, 2003)]. We investigated the ability to break camouflage and continue to see camouflaged targets after motion stops. This is crucial for the survival of hunting predators. With camouflage, visual targets and distracters cannot be distinguished using only static image structure (i.e., appearance). Motion generates another source of optical information, optic flow, which breaks camouflage and specifies target locations. Optic flow calibrates image structure with respect to spatial relations among targets and distracters, and calibrated image structure makes previously camouflaged targets perceptible in a temporally stable fashion after motion stops. We investigated this proposal using laboratory experiments and compared how many camouflaged targets were identified either with optic flow information alone or with combined optic flow and image structure information. Our results show that the combination of motion-generated optic flow and target-projected image structure information yielded efficient and stable perception of camouflaged targets.
- Published
- 2017
31. An evolutionary robotics model of visually-guided braking: Testing optical variables
- Author
-
Geoffrey P. Bingham, Didem Kadihasanoglu, Randall D. Beer, TOBB ETU, Faculty of Science and Literature, Department of Psychology, TOBB ETÜ, Fen Edebiyat Fakültesi, Psikoloji Bölümü, and Kadıhasanoğlu, Didem
- Subjects
Basis (linear algebra) ,Dynamical systems theory ,business.industry ,ComputerSystemsOrganization_MISCELLANEOUS ,Visually guided ,Work (physics) ,Evolutionary robotics ,Artificial intelligence ,business ,Object (computer science) ,Task (project management) ,Image (mathematics) - Abstract
This paper presents results from a series of evolutionary robotics simulations that were designed to investigate the informational basis of visually-guided braking. Evolutionary robotics techniques were used to develop models of visually-guided braking behavior in humans to aid in resolving existing questions in the literature. Based on a well-used experimental paradigm from psychology, model agents were evolved to solve a driving-like braking task in a simple 2D environment involving one object. Agents had five sensors to detect image size of the object, image expansion rate, tau, tau-dot and proportional rate, respectively. These optical variables were those tested in experimental investigations of visually-guided braking in humans. The aim of the present work was to investigate which of these optical variables were used by the evolved agents to solve the braking task when all variables were available to control braking. Our results indicated that the agent with the highest performance used exclusively proportional rate to control braking. The agent with the lowest performance was found to be using primarily tau-dot together with image size and image expansion rate.
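All five sensor inputs named above derive from the image-size signal. The finite-difference sketch below is our construction, not the paper's code, and it covers image size, expansion rate, tau, and tau-dot; proportional rate is omitted because its exact definition depends on the controlled variable.

```python
def optical_variables(theta, dt):
    """Estimate image size, expansion rate, tau, and tau-dot from a
    sampled image-size series theta (radians) via finite differences."""
    samples = []
    prev_tau = None
    for i in range(1, len(theta)):
        size = theta[i]
        expansion = (theta[i] - theta[i - 1]) / dt
        tau = size / expansion if expansion else float("inf")
        tau_dot = (tau - prev_tau) / dt if prev_tau is not None else None
        samples.append({"size": size, "expansion": expansion,
                        "tau": tau, "tau_dot": tau_dot})
        prev_tau = tau
    return samples

# Constant-velocity approach: Z(t) = 20 - 2t for an object 1 m in size,
# so image size is 1 / Z(t) under the small-angle approximation.
dt = 0.01
series = [1.0 / (20.0 - 2.0 * i * dt) for i in range(100)]
optics = optical_variables(series, dt)
# For a constant approach speed, tau-dot settles at the well-known value -1.
```

An agent that brakes only when tau-dot drops below a critical value, or that holds a rate variable constant, can be built directly on top of such sensor estimates, which is the space of strategies the evolved agents explored.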
- Published
- 2017
32. Embodied memory allows accurate and stable perception of hidden objects despite orientation change
- Author
-
Geoffrey P. Bingham, Ned Bingham, and Jing Samantha Pan
- Subjects
Adult ,Male ,Adolescent ,media_common.quotation_subject ,Stability (learning theory) ,Motion Perception ,Experimental and Cognitive Psychology ,050105 experimental psychology ,Motion (physics) ,03 medical and health sciences ,Behavioral Neuroscience ,Young Adult ,0302 clinical medicine ,Arts and Humanities (miscellaneous) ,Memory ,Perception ,Humans ,0501 psychology and cognitive sciences ,Computer vision ,media_common ,Communication ,business.industry ,Orientation (computer vision) ,05 social sciences ,Process (computing) ,Identification (information) ,Pattern Recognition, Visual ,Embodied cognition ,Space Perception ,Pattern recognition (psychology) ,Female ,Artificial intelligence ,Psychology ,business ,030217 neurology & neurosurgery ,Psychomotor Performance - Abstract
Rotating a scene in a frontoparallel plane (rolling) yields a change in orientation of the constituent images. When using only information provided by static images to perceive a scene after an orientation change, identification performance typically decreases (Rock & Heimer, 1957). However, rolling generates optic flow information that relates the discrete, static images (before and after the change) and forms an embodied memory that aids recognition. The embodied memory hypothesis predicts that upon detecting a continuous spatial transformation of image structure, that is, upon seeing the continuous rolling process and the objects undergoing rolling, observers should accurately perceive objects during and after motion. Thus, in this case, orientation change should not affect performance. We tested this hypothesis in three experiments and found that (a) using combined optic flow and image structure, participants identified locations of previously perceived but currently occluded targets with great accuracy and stability (Experiment 1); (b) using combined optic flow and image structure information, participants identified hidden targets equally well with or without 30° orientation changes (Experiment 2); and (c) when the rolling was unseen, identification of hidden targets after an orientation change became worse (Experiment 3). Furthermore, when rolling was unseen, although target identification was better when participants were told about the orientation change than when they were not, performance was still worse than when there was no orientation change. Therefore, combined optic flow and image structure information, not mere knowledge about the rolling, enables accurate and stable perception despite orientation change.
- Published
- 2017
33. Seeing Where the Stone Is Thrown by Observing a Point-Light Thrower: Perceiving the Effect of Action Is Enabled by Information, Not Motor Experience
- Author
-
Geoffrey P. Bingham and Qin Zhu
- Subjects
Point light ,Communication ,General Computer Science ,Social Psychology ,business.industry ,media_common.quotation_subject ,Experimental and Cognitive Psychology ,Biological motion perception ,Perception ,Big game ,Visual experience ,Psychology ,business ,Ecology, Evolution, Behavior and Systematics ,Throwing ,media_common ,Biological motion ,Cognitive psychology - Abstract
People are very adept at perceiving biological motion (e.g., Johansson, 1973). This ability has been an essential life skill for members of this social species. The human niche during the ice age was socially coordinated hunting for big game. Being able to judge the location targeted by the throw of a conspecific would be a valuable perceptual ability, which we now study to investigate 2 competing theories of biological motion perception: Common Coding (CC; Prinz, 1997) and Kinematic Specification of Dynamics (KSD; Runeson & Frykholm, 1983). The 2 theories diverge in attributing perceptual ability to either motor or visual experience, respectively. To test predictions of the CC theory, we performed 3 experiments to manipulate observers' specific motor experience while they judged the targeted location of throwing by watching point-light displays. In Experiment 1, we tested whether the identity of the thrower in the display mattered. In Experiment 2, we tested whether the motor expertise of the observer mattered...
- Published
- 2014
34. Information and control strategy to solve the degrees-of-freedom problem for nested locomotion-to-reach
- Author
- Aaron J. Fath, Brian S. Marks, Geoffrey P. Bingham, and Winona Snapp-Childs
- Subjects
Adult, Male, Movement, General Neuroscience, Control (management), Rate control, Middle Aged, Motor Activity, Hand, Biomechanical Phenomena, Degrees of freedom problem, Task (project management), Moment (mathematics), Young Adult, Action (philosophy), Control theory, Humans, Binocular disparity, Female, Psychology, Locomotion, Psychomotor Performance - Abstract
Locomoting to reach a target is a common visuomotor approach behavior that consists of two nested component actions: locomotion and reaching. The information and control strategies that guide locomotion and reaching in isolation are well studied, but their interaction during locomoting-to-reach behavior has received little attention. We investigated the role of proportional rate control in unifying these components into one action. Individuals use this control strategy with hand-centric disparity-based τ information to guide seated reaching (Anderson and Bingham in Exp Brain Res 205:291–306. doi:10.1007/s00221-010-2361-9, 2010) and use it with sequential information to perform targeted locomotion to bring an outstretched arm and hand to a target: first with eye-centric τ information and then with hand-centric τ information near the target (Anderson and Bingham in Exp Brain Res 214:631–644. doi:10.1007/s00221-011-2865-y, 2011). In the current study, participants performed two tasks: locomoting to bring a rigidly outstretched arm and hand to a target (handout), and locomoting to initiate and guide a reach to a target (locomoting-to-reach). Movement trajectories were analyzed. Results showed that participants used proportional rate control throughout both tasks, in the sequential manner found by Anderson and Bingham (Exp Brain Res 214:631–644. doi:10.1007/s00221-011-2865-y, 2011). Individual differences were found in the moment at which this information switch occurred in the locomoting-to-reach task. Some participants appeared to switch to proportional rate control with hand-τ once the hand came into view, and others switched once the reaching component was complete and the arm was fully outstretched. In the locomoting-to-reach task, participants consistently initiated reaches when eye-τ specified a time-to-contact of 1.0 s. Proportional rate control provides a solution to the degrees-of-freedom problem in the classic manner, by making multiple things one.
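The control strategy described here can be illustrated with a toy simulation. This is a minimal sketch under an explicit assumption: "proportional rate" is read as keeping approach speed proportional to remaining distance, so the speed-to-distance ratio is held constant during the approach. The function name, gain `k`, and time step are illustrative, not parameters from the studies cited above.

```python
# Toy illustration of proportional rate control of an approach.
# Assumption (illustrative): the controller holds approach speed
# proportional to the remaining distance (constant v/d ratio), so the
# distance to the target decays smoothly without overshoot.

def proportional_rate_approach(d0, k=1.5, dt=0.01, duration=10.0):
    """Simulate an approach that maintains v = k * d at every step."""
    d = d0
    trajectory = [(0.0, d)]
    steps = int(duration / dt)
    for i in range(steps):
        v = k * d            # proportional rate law
        d -= v * dt          # Euler step toward the target
        trajectory.append(((i + 1) * dt, d))
    return trajectory

traj = proportional_rate_approach(d0=2.0)
# The remaining distance shrinks monotonically toward zero: a soft,
# collision-free arrival, which is why the strategy tolerates a range
# of gain values while still yielding good performance.
```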
- Published
- 2014
35. Calibration is both functional and anatomical
- Author
- Geoffrey P. Bingham, Mark Mon-Williams, and Jing Samantha Pan
- Subjects
Adult, Male, Computer science, Feedback, Psychological, Experimental and Cognitive Psychology, Adaptation (eye), Article, Behavioral Neuroscience, Arts and Humanities (miscellaneous), Perceptual-motor processes, Calibration, Humans, Computer vision, Haptic technology, Communication, business.industry, Feed forward, Space perception, Hand, Spatial perception, Adaptation, Physiological, Touch Perception, Action (philosophy), Space Perception, Female, Artificial intelligence, business - Abstract
Bingham and Pagano (1998) described calibration as a mapping from embodied perceptual units to an embodied action unit and suggested that it is an inherent component of perception/action that yields accurate targeted actions. We tested two predictions of this "Mapping Theory." First, calibration should transfer between limbs, because it involves a mapping from perceptual units to an action unit and thus is functionally specific to the action (Pan, Coats, and Bingham, 2014). We used distorted haptic feedback to calibrate feedforward right-hand reaches and tested right- and left-hand reaches after calibration. The calibration transferred. Second, the Mapping Theory predicts that limb-specific calibration should be possible because the units are embodied and anatomy contributes to their scaling. Limbs must be calibrated to one another given potential anatomical differences among them. We used distorted haptic feedback to calibrate feedforward reaches with the right and left arms simultaneously in opposite directions relative to a visually specified target. Reaches tested after calibration revealed reliable limb-specific calibration. Both predictions were confirmed. This resolves a prevailing controversy as to whether calibration is functional (Bruggeman & Warren, 2010; Rieser, Pick, Ashmead, & Garing, 1995) or anatomical (Durgin et al., 2003; Durgin & Pelah, 1999). Necessarily, it is both.
- Published
- 2014
36. The dynamics of sensorimotor calibration in reaching-to-grasp movements
- Author
- Geoffrey P. Bingham and Mark Mon-Williams
- Subjects
Male, Communication, Hand Strength, Physiology, Calibration (statistics), business.industry, General Neuroscience, GRASP, Process (computing), Adaptation (eye), Articles, Sensory Gating, Adaptation, Physiological, Weighting, Noise, Feature (computer vision), Humans, Female, Computer vision, Artificial intelligence, business, Psychology, Psychomotor Performance, Haptic technology - Abstract
Reach-to-grasp movements require information about the distance and size of target objects. Calibration of this information could be achieved via feedback information (visual and/or haptic) regarding terminal accuracy when target objects are grasped. A number of reports suggest that the nervous system alters reach-to-grasp behavior following either a visual or haptic error signal indicating inaccurate reaching. Nevertheless, the reported modification is generally partial (reaching is changed less than predicted by the feedback error), a finding that has been ascribed to slow adaptation rates. It is possible, however, that the modified reaching reflects the system's weighting of the visual and haptic information in the presence of noise rather than calibration per se. We modeled the dynamics of calibration and showed that the discrepancy between reaching behavior and the feedback error results from an incomplete calibration process. Our results provide evidence for calibration being an intrinsic feature of reach-to-grasp behavior.
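The claim that partial adaptation can arise from an incomplete calibration process, rather than from cue weighting, can be shown with a toy first-order calibration dynamic. A minimal sketch under stated assumptions: the learning rate `alpha`, the trial counts, and the function name are hypothetical illustrations, not the model fitted in the paper.

```python
# Toy first-order calibration dynamic: each trial nudges the internal
# mapping by a fixed fraction (alpha) of the error still experienced.
# With finitely many trials the process is incomplete, so measured
# adaptation is partial even though the rule converges in the limit.

def fraction_adapted(error, alpha=0.2, trials=10):
    """Fraction of a constant feedback error corrected after `trials`."""
    correction = 0.0
    for _ in range(trials):
        residual = error - correction   # error still experienced
        correction += alpha * residual  # first-order calibration update
    return correction / error

partial = fraction_adapted(5.0, trials=10)      # < 1.0: partial adaptation
asymptotic = fraction_adapted(5.0, trials=200)  # ~1.0: calibration completes
```

The point of the sketch is that a behaviorally "partial" response needs no extra weighting mechanism; it falls out of truncating a complete calibration process at a finite number of trials.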
- Published
- 2013
37. Perception of relative throw-ability
- Author
- Todd Mirich, Qin Zhu, and Geoffrey P. Bingham
- Subjects
Adult, Male, Property (programming), media_common.quotation_subject, Statistics, Nonparametric, Task (project management), Judgment, Young Adult, Reference Values, Perception, Reaction Time, Humans, Weight Perception, Affordance, media_common, Analysis of Variance, Hand Strength, General Neuroscience, Female, Percept, Psychology, Social psychology, Psychomotor Performance, Throwing, Cognitive psychology - Abstract
Bingham et al. (J Exp Psychol Hum Percept Perform 15(3):507-528, 1989) showed that skilled throwers can perceive optimal objects for throwing to a maximum distance. Zhu and Bingham (J Exp Psychol Hum Percept Perform 34(4):929, 2008; 36(4):862-875, 2010) replicated this finding and then showed that felt heaviness is used to perceive this affordance (see also Zhu and Bingham in Evol Hum Behav 32(4):288-293, 2011; Zhu et al. in Exp Brain Res 224(2):221-231, 2013). Throwers pick the best weight for spherical projectiles of each graspable size. Bingham et al. (1989) also speculated that relative throw-ability might be perceptible. This would mean that the ordering of distances achieved by maximum-effort throws of different objects could be judged. This affordance property is not the same as optimal throw-ability, because it requires all projectiles to be evaluated relative to one another with respect to ordinally scaled distances, not just a discrete optimum. We used a magnitude estimation task to test this hypothesis, comparing the resulting ordering with the ordering of throw distances found in previous studies. The findings show that participants were able to perform the perceptual task. However, discrimination among objects of different weight within a size was better than between sizes. The implications of these results for understanding the information used to perform this task are discussed.
- Published
- 2013
38. Embodied memory: Effective and stable perception by combining optic flow and image structure
- Author
- Jing Samantha Pan, Geoffrey P. Bingham, and Ned Bingham
- Subjects
Adult, Male, Visual perception, media_common.quotation_subject, Short-term memory, Experimental and Cognitive Psychology, Context (language use), Optic Flow, Young Adult, Behavioral Neuroscience, Arts and Humanities (miscellaneous), Memory, Perception, Humans, Contrast (vision), Computer vision, media_common, Communication, business.industry, Memory, Short-Term, Flow (mathematics), Embodied cognition, Mental Recall, Visual Perception, Female, Artificial intelligence, Focus (optics), business, Psychology, Psychomotor Performance - Abstract
Visual perception studies typically focus either on optic flow structure or on image structure, but not on the combination and interaction of these two sources of information. Each offers unique strengths in contrast to the other's weaknesses. Optic flow yields intrinsically powerful information about 3D structure, but is ephemeral. It ceases when motion stops. Image structure is less powerful in specifying 3D structure, but is stable. It remains when motion stops. Optic flow and image structure are intrinsically related in vision because the optic flow carries one image to the next. This relation is especially important in the context of progressive occlusion, in which optic flow provides information about the location of targets hidden in subsequent image structure. In four experiments, we investigated the role of image structure in "embodied memory" in contrast to memory that is only in the head. We found that either optic flow (Experiment 1) or image structure (Experiment 2) alone was relatively ineffective, whereas the combination was effective and, in contrast to conditions requiring reliance on memory-in-the-head, much more stable over extended time (Experiments 2 through 4). Limits well documented for visual short-term memory (that is, memory-in-the-head) were strongly exceeded by embodied memory. The findings support J. J. Gibson's (1979/1986, The Ecological Approach to Visual Perception, Boston, MA, Houghton Mifflin) insights about progressive occlusion and the embodied nature of perception and memory.
- Published
- 2013
39. Perceptuo-motor learning rate declines by half from 20s to 70/80s
- Author
- Rachel O. Coats, Geoffrey P. Bingham, Andrew D. Wilson, and Winona Snapp-Childs
- Subjects
Male, Aging, media_common.quotation_subject, Developmental psychology, Task (project management), Young Adult, Rhythm, Age groups, Surveys and Questionnaires, Perception, Humans, Learning, Motion perception, Least-Squares Analysis, Young adult, Aged, media_common, Aged, 80 and over, Analysis of Variance, General Neuroscience, Retention, Psychology, Motor Skills, Younger adults, Data Interpretation, Statistical, Linear Models, Female, Psychology, Motor learning, Psychomotor Performance - Abstract
This study examined perception-action learning in younger adults in their 20s compared to older adults in their 70s and 80s. The goal was to provide, for the first time, quantitative estimates of perceptuo-motor learning rates for each age group and to reveal how these learning rates change between these age groups. We used a visual coordination task in which participants were asked to learn to produce a novel coordinated rhythmic movement. The task has been studied extensively in young adults, and its characteristics are well understood. All groups showed improvement, although learning rates for those in their 70s and 80s were half those for participants in their 20s. We consider the potential causes of these differences in learning rates by examining performance across the different coordination patterns, as well as recent results that reveal age-related deficits in motion perception.
- Published
- 2012
40. A Dynamical Analysis of the Suitability of Prehistoric Spheroids from the Cave of Hearths as Thrown Projectiles
- Author
- Andrew D. Wilson, Lawrence Barham, Geoffrey P. Bingham, Ian G. Stanistreet, and Qin Zhu
- Subjects
geography, Multidisciplinary, geography.geographical_feature_category, Hearth, Projectile, 05 social sciences, Archaeology, Article, 050105 experimental psychology, Stone Age, Prehistory, 03 medical and health sciences, 0302 clinical medicine, Cave, 0501 psychology and cognitive sciences, Middle Stone Age, 030217 neurology & neurosurgery, Acheulean, Throwing, Geology - Abstract
Spheroids are ball-shaped stone objects found in African archaeological sites dating from 1.8 million years ago (Early Stone Age) to at least 70,000 years ago (Middle Stone Age). Spheroids are either fabricated or naturally shaped stones selected and transported to places of use, making them one of the longest-used technologies on record. Most hypotheses about their use suggest they were percussive tools for shaping or grinding other materials. However, their size and spherical shape make them potentially useful as projectile weapons, a property that humans, uniquely, have been specialised to exploit for millions of years. Here we show (using simulations of projectile motions resulting from human throwing) that 81% of a sample of spheroids from the late Acheulean (Bed 3) at the Cave of Hearths, South Africa, afford being thrown so as to inflict worthwhile damage to a medium-sized animal over distances of up to 25 m. Most of the objects have weights that produce optimal levels of damage from throwing, rather than simply being as heavy as possible (as would suit other functions). Our results show that these objects were eminently suitable for throwing, and demonstrate how empirical research on behavioural tasks can inform and constrain our theories about prehistoric artefacts.
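The underlying ballistics here are standard projectile motion. A minimal drag-free sketch (the published simulations modeled human throwing in more detail, including release constraints and damage thresholds; the mass, release speed, and angle below are illustrative assumptions, not values from the paper):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def throw_range(speed, angle_deg):
    """Drag-free range (m) for release and landing at the same height."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2.0 * theta) / G

def kinetic_energy(mass, speed):
    """Kinetic energy (J) of the projectile, ignoring air resistance."""
    return 0.5 * mass * speed ** 2

# Illustrative: a 0.5 kg spheroid released at 20 m/s and 45 degrees
# carries well beyond the 25 m distance considered in the paper.
r = throw_range(20.0, 45.0)     # about 40.8 m
e = kinetic_energy(0.5, 20.0)   # 100 J at release
```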
- Published
- 2016
41. Perceived 3D metric (or Euclidean) shape is merely ambiguous, not systematically distorted
- Author
- Geoffrey P. Bingham, Young Lim Lee, and Mats Lind
- Subjects
Adult, Male, Depth Perception, General Neuroscience, media_common.quotation_subject, Context (language use), Ambiguity, Ellipse, Form Perception, Judgment, Pattern Recognition, Visual, Form perception, Statistics, Metric (mathematics), Psychophysics, Humans, Female, Percept, Depth perception, Photic Stimulation, Mathematics, media_common - Abstract
Many studies have reported that perceived shape is systematically distorted, but Lind et al. (Inf Vis 2:51–57, 2003) and Todd and Norman (Percept Psychophys 65:31–47, 2003) both found that distortions varied with tasks and observers. We now investigated the hypothesis that perception of 3D metric (or Euclidean) shape is ambiguous rather than systematically distorted by testing whether variations in context would systematically alter apparent distortions. The task was to adjust the aspect ratio of an ellipse on a computer screen to match the cross-section of a target elliptical cylinder object viewed in either frontoparallel elliptical cross-section (2D) or elliptical cross-section in depth (3D). Three different groups were tested using two tasks and two different ranges of aspect ratio: Group 1) 2D(Small) → 3D(Large), Group 2) 2D(Large) → 3D(Small), Group 3a) 2D(Small) → 3D(Small), and Group 3b) 2D(Large) → 3D(Large). Observers performed the 2D task accurately. This provided the context. The results showed the expected order of slopes when judged aspect ratios were regressed on actual aspect ratios: Group 1 (SL)
- Published
- 2012
42. Object recognition using metric shape
- Author
- Geoffrey P. Bingham, Ned Bingham, Mats Lind, and Young-Lim Lee
- Subjects
Adult, Male, Adolescent, Computer science, Context (language use), Young Adult, Active shape model, Humans, Structure from motion, Shape perception, Structure-from-motion, Analysis of Variance, Communication, business.industry, Perspective (graphical), Cognitive neuroscience of visual object recognition, Recognition, Psychology, Pattern recognition, Object recognition, Middle Aged, Stereo vision, Sensory Systems, Form Perception, Two Visual System theory, Ophthalmology, Stereopsis, Line (geometry), Metric (mathematics), Female, Artificial intelligence, business, Photic Stimulation - Abstract
Most previous studies of 3D shape perception have shown a general inability to visually perceive metric shape. In line with this, studies of object recognition have shown that only qualitative differences, not quantitative or metric ones, can be used effectively for object recognition. Recently, Bingham and Lind (2008) found that large perspective changes (≥45°) allow perception of metric shape, and Lee and Bingham (2010) found that this, in turn, allowed accurate feedforward reaches to grasp objects varying in metric shape. We now investigated whether this information would allow accurate and effective recognition of objects that vary in metric shape. Both judgment accuracies (d′) and reaction times confirmed that, given the visual information available in large perspective changes, recognition of objects using quantitative properties was equivalent in accuracy and speed to recognition using qualitative properties. The ability to recognize objects based on their metric shape is, therefore, a function of the availability or unavailability of the requisite visual information. These issues and results are discussed in the context of the Two Visual System hypothesis of Milner and Goodale (1995, 2006).
- Published
- 2012
43. The stability of rhythmic movement coordination depends on relative speed: the Bingham model supported
- Author
- Winona Snapp-Childs, Geoffrey P. Bingham, and Andrew D. Wilson
- Subjects
Adult, Periodicity, Time Factors, Movement, Models, Neurological, Motion Perception, Stability (probability), Functional Laterality, Motion (physics), Task (project management), Relative direction, Young Adult, Reaction Time, Humans, Harmonic oscillator, Communication, business.industry, General Neuroscience, Mathematical analysis, Relative velocity, Middle Aged, Amplitude, Percept, Psychology, business, Psychomotor Performance - Abstract
Following many studies showing that the coupling in bimanual coordination can be perceptual, Bingham (Ecol Psychol 16:45-53, 2001; 2004a, b) proposed a dynamical model of such movements. The model contains three key hypotheses: (1) being able to produce stable coordinative movements is a function of the ability to perceive relative phase, (2) the information used to perceive relative phase is the relative direction of motion, and (3) the ability to resolve this information is conditioned by relative speed. The first two hypotheses have been well supported (Wilson and Bingham in Percept Psychophys 70:465-476, 2008; Wilson et al. in J Exp Psychol Hum 36:1508-1514, 2010a), but the third was not supported when tested by de Rugy et al. (Exp Brain Res 184:269-273, 2008) using a visual coordination task that required simultaneous control of both the amplitude and the relative phase of movement. The purposes of the current study were to replicate this task with additional measures and to modify the original model to apply it to the new task. To do this, we conducted two experiments. First, we tested the ability to produce 180° visual coordination at different frequencies to determine frequencies suitable for testing in the de Rugy et al. task. Second, we tested the de Rugy et al. task but included additional measures that yielded results different from those reported by de Rugy et al. These results were used to elaborate the original model. First, one of the phase-driven oscillators was replaced with a harmonic oscillator, so the resulting coupling was unidirectional. This change resulted in the model producing less stable 180° coordination behavior beyond 1.5 Hz, consistent with the results obtained in Experiment 1. Next, amplitude control and phase correction elements were added to the model. With these changes, the model reproduced the behaviors observed in Experiment 2. The central finding was that the stability of rhythmic movement coordination does depend on relative speed and, thus, all three of the hypotheses contained in the original Bingham model are supported.
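Coordination stability in such experiments is typically quantified through the variability of continuous relative phase between the two movements. A minimal sketch, assuming near-sinusoidal trajectories so that phase can be recovered from normalized position and velocity (a generic analysis, not the study's exact pipeline; the function name is illustrative):

```python
import math

def relative_phase_deg(x1, v1, x2, v2):
    """Continuous relative phase (degrees, in [0, 360)) computed from
    unit-normalized position and velocity of two rhythmic movements."""
    phi1 = math.atan2(-v1, x1)
    phi2 = math.atan2(-v2, x2)
    return math.degrees(phi1 - phi2) % 360.0

# Two 1 Hz sinusoids moving in anti-phase (target: 180 deg). Velocities
# are divided by the angular frequency so both signals have unit range,
# as continuous relative phase analyses require.
w = 2.0 * math.pi
phases = []
for i in range(100):
    t = i * 0.01
    x1, v1 = math.sin(w * t), math.cos(w * t)
    x2, v2 = math.sin(w * t + math.pi), math.cos(w * t + math.pi)
    phases.append(relative_phase_deg(x1, v1, x2, v2))
# A stable pattern shows near-zero variability around its mean (~180 deg);
# loss of stability at higher frequencies shows up as growing variability.
```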
- Published
- 2011
44. Human readiness to throw: the size–weight illusion is not an illusion when picking the best objects to throw
- Author
- Qin Zhu and Geoffrey P. Bingham
- Subjects
media_common.quotation_subject, Illusion, Experimental and Cognitive Psychology, Weight Perception, Overarm throwing, Language development, Arts and Humanities (miscellaneous), Motor processes, Perception, Psychology, Social psychology, Ecology, Evolution, Behavior and Systematics, Throwing, media_common, Cognitive psychology - Abstract
Long-distance throwing is uniquely human and enabled Homo sapiens to survive and even thrive during the ice ages. The precise motoric timing that throwing requires links throwing and speech abilities, both of which depend on the same uniquely human brain structures. Evidence from studies of brain evolution is consistent with this understanding of the evolution and success of H. sapiens. Recent theories of language development find a readiness to develop language capabilities in perceptual biases that help generate the ability to detect the relevant higher-order acoustic units that underlie speech. Might human throwing capabilities exhibit similar forms of readiness? Recently, human perception of optimal objects for long-distance throwing was found to exhibit a size–weight relation similar to the size–weight illusion; greater weights were picked for larger objects, and these objects were thrown the farthest. The size–weight illusion is this: when two objects of equal mass but different size are lifted, the larger is misperceived as less heavy than the smaller. The illusion is reliable and robust. It persists when people know the masses are equal and handle the objects properly. Children younger than 2 years of age exhibit it. These findings suggest the illusion is intrinsic to humans. Here we show that perception of heaviness (including the illusion) and perception of optimal objects for throwing are equivalent. Thus, the illusion is functional, not a misperception: optimal objects for throwing are picked as having a particular heaviness. The best heaviness is learned while acquiring throwing skill. We suggest that the illusion is a perceptual bias that reflects a readiness to acquire fully functional throwing ability. This unites human throwing and speaking abilities in development in a manner that is consistent with their evolutionary history.
- Published
- 2011
45. Discovering affordances that determine the spatial structure of reach-to-grasp movements
- Author
- Mark Mon-Williams and Geoffrey P. Bingham
- Subjects
Adult, Male, Communication, Visual perception, Hand Strength, Computer science, business.industry, Movement, General Neuroscience, GRASP, Spatial Behavior, Hand, Collision, Object (philosophy), Task (project management), Young Adult, Human–computer interaction, Ecological psychology, Humans, Female, business, Affordance, Psychomotor Performance, Collision avoidance - Abstract
Extensive research has identified the affordances used to guide actions, as originally conceived by Gibson (Perceiving, acting, and knowing: towards an ecological psychology. Erlbaum, Hillsdale, 1977; The ecological approach to visual perception. Erlbaum, Hillsdale, 1979/1986). We sought to discover the object affordance properties that determine the spatial structure of reach-to-grasp movements--movements that entail both collision avoidance and targeting. First, we constructed objects that presented a significant collision hazard and varied properties relevant to targeting, namely, object width and size of contact surface. Participants reached to grasp the objects at three speeds (slow, normal, and fast). In Experiment 1, we explored a "stop" task where participants grasped the objects without moving them. In Experiment 2, we studied "fly-through" movements where the objects were lifted. We discovered the object affordance properties that produced covariance in the spatial structure of reaches-to-grasp. Maximum grasp aperture (MGA) reflected affordances determined by collision avoidance. Terminal grasp aperture (TGA)--when the hand stops moving but prior to finger contact--reflected affordances relevant to targeting accuracy. A model with a single free parameter predicted the prehensile spatial structure and provided a functional affordance-based account of that structure. In Experiment 3, we investigated a "slam" task where participants reached to grasp flat rectangular objects on a tabletop. The affordance structure of this task was found to eliminate the collision risk, and thus safety margins in MGA and TGA were reduced to zero for larger objects. The results emphasize the role of affordances in determining the structure and scaling of reach-to-grasp actions. Finally, we report evidence supporting the opposition vector as an appropriate unit of analysis in the study of grasping and a unit of action that maps directly to affordance properties.
- Published
- 2011
46. Modeling 3D Slant Perception: Bootstrapping 3D Affine Structure to Euclidean
- Author
- Xiaoye Wang, Geoffrey P. Bingham, and Mats Lind
- Subjects
Computer science, business.industry, media_common.quotation_subject, Structure (category theory), Pattern recognition, Bootstrapping (linguistics), Sensory Systems, Ophthalmology, Perception, Euclidean geometry, Artificial intelligence, Affine transformation, business, media_common - Published
- 2018
47. Learning a coordinated rhythmic movement with task-appropriate coordination feedback
- Author
- Andrew D. Wilson, Geoffrey P. Bingham, Rachel O. Coats, and Winona Snapp-Childs
- Subjects
Adult, Male, Periodicity, Adolescent, Process (engineering), media_common.quotation_subject, Control (management), Metronome, Task (project management), law.invention, Young Adult, Rhythm, Feedback, Sensory, Human–computer interaction, law, Perception, Humans, Learning, media_common, Communication, business.industry, Movement (music), General Neuroscience, Work (physics), Female, business, Psychology, Photic Stimulation, Psychomotor Performance - Abstract
A common perception-action learning task is to teach participants to produce a novel coordinated rhythmic movement, e.g. 90° mean relative phase. As a general rule, people cannot produce these novel movements stably without training. This is because they are extremely poor at discriminating the perceptual information required to coordinate and control the movement, which means people require additional (augmented) feedback to learn the novel task. Extant methods (e.g. visual metronomes, Lissajous figures) work, but all involve transforming the perceptual information about the task and thus altering the perception-action task dynamic being studied. We describe and test a new method for providing online augmented coordination feedback using a neutral colour cue. This does not alter the perceptual information or the overall task dynamic, and an experiment confirms that (a) feedback is required for learning a novel coordination and (b) the new feedback method provides the necessary assistance. This task-appropriate augmented feedback therefore allows us to study the process of learning while preserving the perceptual information that constitutes a key part of the task dynamic being studied. This method is inspired by and supports a fully perception-action approach to coordinated rhythmic movement.
- Published
- 2010
48. Large perspective changes yield perception of metric shape that allows accurate feedforward reaches-to-grasp and it persists after the optic flow has stopped!
- Author
- Geoffrey P. Bingham and Young-Lim Lee
- Subjects
Adult, Male, Visual perception, Adolescent, Computer science, media_common.quotation_subject, Optical flow, Judgment, Young Adult, Perception, Humans, Computer vision, Size Perception, media_common, Communication, Hand Strength, business.industry, General Neuroscience, Perspective (graphical), Body movement, Form Perception, Stereopsis, Metric (mathematics), Visual Perception, Female, Artificial intelligence, Percept, business - Abstract
Lee et al. (Percept Psychophys 70:1032-1046, 2008a) investigated whether visual perception of metric shape could be calibrated when used to guide feedforward reaches-to-grasp. It could not. Seated participants viewed target objects (elliptical cylinders) in normal lighting using stereo vision and free head movements that allowed small (approximately 10°) perspective changes. The authors concluded that poor perception of metric shape was the reason reaches-to-grasp should be visually guided online. However, Bingham and Lind (Percept Psychophys 70:524-540, 2008) showed that large perspective changes (≥45°) yield good perception of metric shape. So we repeated the Lee et al. study with the addition of information from large perspective changes. The results were accurate feedforward reaches-to-grasp, reflecting accurate perception of both metric shape and metric size. Large perspective changes occur when one locomotes into a workspace in which reaches-to-grasp are subsequently performed. Does the resulting perception of metric shape persist after the large perspective changes have ceased? Experiments 2 and 3 tested reaches-to-grasp with delays (Exp. 2, 5-s delay; Exp. 3, approximately 16-s delay) and multiple objects to be grasped after a single viewing. Perception of metric shape and metric size persisted, yielding accurate reaches-to-grasp. We advocate the study of nested actions using a dynamic approach to perception/action.
- Published
- 2010
49. Perceptual learning immediately yields new stable motor coordination
- Author
- Geoffrey P. Bingham, Andrew D. Wilson, and Winona Snapp-Childs
- Subjects
Adult, Male, Feedback, Psychological, media_common.quotation_subject, Motion Perception, Experimental and Cognitive Psychology, Choice Behavior, Task (project management), Developmental psychology, Discrimination Learning, Judgment, Young Adult, Behavioral Neuroscience, Rhythm, Arts and Humanities (miscellaneous), Perceptual learning, Orientation, Perception, Psychophysics, Humans, media_common, Movement (music), Two-alternative forced choice, Retention, Psychology, Motor control, Middle Aged, Motor coordination, Pattern Recognition, Visual, Space Perception, Female, Psychology, Psychomotor Performance, Cognitive psychology - Abstract
Coordinated rhythmic movement is specifically structured in humans. Movement at 0° mean relative phase is maximally stable, 180° is less stable, and other coordinations can be produced, but only after learning. Variations in perceptual ability play a key role in determining the observed stabilities, so we investigated whether stable movements can be acquired by improving perceptual ability. We assessed movement stability in Baseline, Post-Training, and Retention sessions by having participants use a joystick to coordinate the movement of two dots on a screen at three relative phases. Perceptual ability was also assessed using a two-alternative forced-choice task in which participants identified a target phase of 90° in a pair of displays. Participants then trained with progressively harder perceptual discriminations around 90°, with feedback. Improved perceptual discrimination of 90° led to improved performance in the movement task at 90°, with no training in the movement task itself. The improvement persisted until Retention without further exposure to either task. A control group's movement stability did not improve. Movement stability is thus a function of perceptual ability, and information is an integral part of the organization of this dynamical system.
- Published
- 2010
50. Learning to perceive the affordance for long-distance throwing: Smart mechanism or function learning?
- Author
-
Qin Zhu and Geoffrey P. Bingham
- Subjects
Male ,Concept Formation ,Feedback, Psychological ,Motion Perception ,Experimental and Cognitive Psychology ,Models, Psychological ,Choice Behavior ,Discrimination Learning ,Judgment ,Behavioral Neuroscience ,Arts and Humanities (miscellaneous) ,Perceptual learning ,Concept learning ,Perception ,Psychophysics ,Humans ,Weight Perception ,Affordance ,Size Perception ,Communication ,Hand Strength ,Distance Perception ,Practice, Psychological ,Space Perception ,Ball (bearing) ,Female ,Motor learning ,Psychology ,Psychomotor Performance ,Throwing ,Cognitive psychology - Abstract
Bingham, Schmidt, and Rosenblum (1989) showed that people are able to select, by hefting balls, the optimal weight for each size of ball to be thrown farthest. We now investigate function learning and smart mechanisms as hypotheses about how this affordance is perceived. Twenty-four unskilled adult throwers learned to throw by practicing with a subset of balls that would only allow acquisition of the ability to perceive the affordance if hefting acts as a smart mechanism to provide access to a single information variable that specifies the affordance. Participants hefted 48 balls of different sizes and weights and judged throwability. Then, participants, assigned to one of four groups, practiced throwing (three groups with vision and one without) for a month using different subsets of balls. Finally, hefting and throwing were tested again with all the balls. The results showed (1) an inability to detect throwability before practice, (2) improvement in throwing with practice, and (3) that participants learned to perceive the affordance, but only with visual feedback. These results indicated that the affordance is perceived using a smart mechanism acquired while learning to throw.
- Published
- 2010