6 results for "delayed reaching"
Search Results
2. When and why does motor preparation arise in recurrent neural network models of motor control?
- Author
- Marine Schimel, Ta-Chu Kao, and Guillaume Hennequin
- Subjects
- recurrent neural networks, motor control, motor preparation, optimal control, delayed reaching, Medicine, Science, Biology (General), QH301-705.5
- Abstract
During delayed ballistic reaches, motor areas consistently display movement-specific activity patterns prior to movement onset. It is unclear why these patterns arise: while they have been proposed to seed an initial neural state from which the movement unfolds, recent experiments have uncovered the presence and necessity of ongoing inputs during movement, which may lessen the need for careful initialization. Here, we modeled the motor cortex as an input-driven dynamical system, and we asked what the optimal way is to control this system to perform fast delayed reaches. We find that delay-period inputs consistently arise in an optimally controlled model of M1. By studying a variety of network architectures, we could dissect and predict the situations in which it is beneficial for a network to prepare. Finally, we show that optimal input-driven control of neural dynamics gives rise to multiple phases of preparation during reach sequences, providing a novel explanation for experimentally observed features of monkey M1 activity in double reaching.
- Published
- 2024
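The record above frames motor cortex as an input-driven dynamical system and asks for the optimal control that produces fast delayed reaches. As a rough illustration of that idea only (not the authors' model or code), the sketch below solves a generic finite-horizon linear-quadratic control problem for a small random linear network with a delay period before the go cue; the network size, time step, connectivity, readout, cost weights, reach target, and epoch lengths are all invented for the example.

# Illustrative sketch (assumed setup, not from the paper): optimal control of a
# linear input-driven network with a delay period before movement.
import numpy as np

rng = np.random.default_rng(0)

n, m = 20, 20                # network units and input channels (assumed)
dt = 0.01
T_delay, T_move = 50, 80     # delay and movement steps (assumed)
T = T_delay + T_move

# Random stable recurrent dynamics: x_{t+1} = A x_t + B u_t
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
A = np.eye(n) + dt * (-np.eye(n) + 0.9 * W)
B = dt * np.eye(n)

# Linear readout of a 2-D "hand" variable; invented reach target in readout space
C = rng.normal(scale=1.0 / np.sqrt(n), size=(2, n))
target = np.array([1.0, 0.5])

# Stage costs: input energy at all times; readout error only after the go cue
R = 1e-3 * np.eye(m)
def stage_cost(t):
    if t >= T_delay:                      # movement epoch
        return C.T @ C, -C.T @ target
    return np.zeros((n, n)), np.zeros(n)  # delay epoch

# Backward recursion for the affine LQ problem (value = x'Px + 2p'x + const)
P, p = np.zeros((n, n)), np.zeros(n)
Ks, ks = [], []
for t in reversed(range(T)):
    Q, q = stage_cost(t)
    G = R + B.T @ P @ B
    K = np.linalg.solve(G, B.T @ P @ A)   # feedback gain
    k = np.linalg.solve(G, B.T @ p)       # feedforward input
    Ks.append(K); ks.append(k)
    P = Q + A.T @ P @ (A - B @ K)
    p = q + (A - B @ K).T @ p
Ks.reverse(); ks.reverse()

# Roll out the optimally controlled network from rest and track input size
x = np.zeros(n)
input_norm = np.zeros(T)
for t in range(T):
    u = -Ks[t] @ x - ks[t]
    input_norm[t] = np.linalg.norm(u)
    x = A @ x + B @ u

print("mean |u| during delay:   ", input_norm[:T_delay].mean())
print("mean |u| during movement:", input_norm[T_delay:].mean())

In this toy setting the controller's feedforward term is already nonzero before the go cue, which is the sense in which "preparatory" delay-period inputs can fall out of optimal control; the paper's analysis of when and why such preparation is beneficial goes well beyond this linear stand-in.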
3. Eye–hand coordination: memory-guided grasping during obstacle avoidance.
- Author
- Abbas, Hana H., Langridge, Ryan W., and Marotta, Jonathan J.
- Subjects
- EYE-hand coordination, VISUAL memory, COFFEE cups
- Abstract
When reaching to grasp previously seen, now out-of-view objects, we rely on stored perceptual representations to guide our actions, likely encoded by the ventral visual stream. So-called memory-guided actions are numerous in daily life, for instance, as we reach to grasp a coffee cup hidden behind our morning newspaper. Little research has examined obstacle avoidance during memory-guided grasping, though it is possible that obstacles with increased perceptual salience will provoke exacerbated avoidance maneuvers, like exaggerated deviations in eye and hand position away from obtrusive obstacles. We examined the obstacle avoidance strategies adopted as subjects reached to grasp a 3D target object under visually-guided (closed loop, or open loop with full vision prior to movement onset) and memory-guided (short- or long-delay) conditions. On any given trial, subjects reached between a pair of flanker obstacles to grasp a target object. The positions and widths of the obstacles were manipulated, though their inner edges remained a constant distance apart. While reach and grasp behavior was consistent with the obstacle avoidance literature, in that reach, grasp, and gaze positions were biased away from the obstacles most obtrusive to the reaching hand, our results reveal that the distinctive avoidance approaches undertaken depend on the availability of visual feedback. Contrary to expectation, we found that subjects reaching to grasp after a long delay in the absence of visual feedback failed to modify their final fixation and grasp positions to accommodate the different positions of the obstacles, demonstrating a more moderate, rather than exaggerated, obstacle avoidance strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
4. Gaze-centered spatial updating in delayed reaching even in the presence of landmarks.
- Author
- Schütz, I., Henriques, D.Y.P., and Fiehler, K.
- Subjects
- GAZE, VISUAL perception, NATURE reserves, ERRORS, EYE contact
- Abstract
Highlights:
- With landmarks present, reach targets are still coded and updated relative to gaze.
- Gaze-dependent coding is found in both immediate and delayed reaching.
- Reach errors with landmarks are less strongly influenced by gaze direction.
- Variable errors are reduced when landmarks are available and unaffected by delay.
- Our findings suggest a combined use of egocentric and allocentric information. [ABSTRACT FROM AUTHOR]
- Published
- 2013
5. Gaze strategies during visually-guided versus memory-guided grasping.
- Author
- Prime, Steven and Marotta, Jonathan
- Subjects
- GAZE, ACQUISITIVENESS, MEMORY, EYEGLASSES, VISUAL evoked response, MOTOR ability, SENSORIMOTOR cortex, FEEDBACK control systems
- Abstract
Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action, e.g. remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or a memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block's centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the 2-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream. [ABSTRACT FROM AUTHOR]
- Published
- 2013
6. Eye-hand coordination: memory-guided grasping in a cluttered environment
- Author
- Ivanco, Tammy (Psychology), Szturm, Tony (College of Rehabilitation Sciences), Marotta, Jonathan (Psychology), and Abbas, Hana H
- Abstract
We often reach for remembered objects, such as when picking up a coffee cup from behind our laptop. In cases like this, we rely on visuospatial memory, encoded by the perceptual mechanisms of the ventral visual stream, to guide our actions, rather than on the real-time control of action by the dorsal visual stream (Milner & Goodale, 1995). Further, our motor plans must often accommodate the messy spaces within which we act, avoiding irrelevant objects in our way. Little research has examined obstacle avoidance during memory-guided grasping, though it is likely that obstacles perceived by the ventral stream as more salient will produce exacerbated avoidance maneuvers. This study examined how the availability of visual feedback altered eye-hand coordination in an obstacle avoidance paradigm. Eye and hand movements were monitored as subjects had to reach through a pair of obstacles in order to grasp a 3-D target object, under full visual feedback (visually-guided), immediately in the absence of visual feedback (memory-guided no-delay), or after a 2-s delay in the absence of visual feedback (memory-guided delay). Positions and widths of obstacles were manipulated, though their inner edges remained a constant distance apart. We expected the memory-guided delay group to exhibit exaggerated avoidance strategies due to a reliance on the perceptual mechanisms of the ventral stream. Results revealed successful obstacle avoidance and grasps of the target object in all groups; however, different avoidance strategies emerged depending on the availability of visual feedback. The visually-guided and memory-guided no-delay groups used real-time visual information to alter the paths of the index finger and wrist and adjust final index finger positions on the target object, to account for positioned obstacles. Still, the no-delay group showed wider index finger paths and a failure to adjust final fixations, resulting from the inability to use visual information for the online control of
- Published
- 2019