Neural Control of Actions Involving Different Coordinate Systems
- Source: Humanoid Robots, Human-like Machines
- Publication Year: 2021
- Publisher: IntechOpen, 2021.
Abstract
- The human body has a complex shape requiring a control structure of matching complexity. This involves keeping track of several body parts that are best represented in different frames of reference (coordinate systems). In performing a complex action, representations in more than one system are active at a time, and switches from one set of coordinate systems to another are performed. During a simple act of grasping, for example, an object is represented in a purely visual, retina-centered coordinate system and is transformed into head- and body-centered representations. On the control side, 3-dimensional movement fields, found in the motor cortex, surround the body and determine the goal position of a reaching movement. A conceptual, object-centered coordinate space representing the difference between target object and hand position may be used for movement corrections near the end of grasping. As a guideline for the development of more sophisticated robotic actions, we take inspiration from the brain. A cortical area represents information about an object or an actuator in a specific coordinate system. This view is generalized in the light of population coding and distributed object representations. In the motor system, neurons represent motor "affordances", which code for certain configurations of object and effector positions, while mirror neurons code actions in an abstract fashion. One challenge to the technological development of a robotic/humanoid action control system, besides vision, is its complexity; another is learning. One must explain the cortical mechanisms which support the several processing stages that transform retinal stimulation into mirror neuron and motor neuron responses (Oztop et al., 2006). Recently, we have trained a frame-of-reference transformation network by unsupervised learning (Weber & Wermter, 2006). It transforms between representations in two reference frames that may dynamically change their position relative to each other, for example between retinal and body-centered coordinates while the eyes move. We will briefly but concisely present this self-organizing network in the context of grasping. We will also discuss mechanisms required for unsupervised learning, such as the requirement that neuronal responses change slowly in those frames of reference that tend to remain constant during a task. This book chapter shall guide and inspire the development of sensory-motor control strategies for humanoids. It is organized as follows. Section 2 reviews neurobiological findings; Section 3 reviews robotic research. Then, after motivating learning in Section 4, we will
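The coordinate transformation described in the abstract, mapping a target from a retina-centered frame into head- and body-centered frames as the eyes and head move, can be illustrated with a minimal sketch. The additive angular model and the function and variable names below are assumptions for illustration only; the chapter's self-organizing network learns such a mapping from data rather than computing it in closed form.

```python
import numpy as np

def retinal_to_body(target_retinal, eye_in_head, head_on_body):
    """Map a target from retina-centered to body-centered coordinates.

    All arguments are 2-D angular positions (azimuth, elevation) in degrees.
    The purely additive model is a deliberate simplification: it ignores
    eye and head torsion and the non-linearities a trained transformation
    network would have to capture.
    """
    target_head = target_retinal + eye_in_head   # retina-centered -> head-centered
    target_body = target_head + head_on_body     # head-centered -> body-centered
    return target_body

# Example: object 10 deg right of the fovea, eyes turned 5 deg left in the
# head, head turned 20 deg right on the trunk -> 25 deg right of the body
# midline (prints [25.  0.]).
print(retinal_to_body(np.array([10.0, 0.0]),
                      np.array([-5.0, 0.0]),
                      np.array([20.0, 0.0])))
```

The sketch makes explicit why the mapping must change dynamically: the same retinal position corresponds to different body-centered positions whenever the eye or head posture changes, which is the situation the unsupervised transformation network is trained to handle.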
Details
- Language: English
- Database: OpenAIRE
- Journal: Humanoid Robots, Human-like Machines
- Accession number: edsair.doi.dedup.....b1f479ca6d5187a97499481f1332835d