29 results on '"Matthew D. Golub"'
Search Results
2. Universality and individuality in neural dynamics across large populations of recurrent networks.
- Author
- Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, and David Sussillo
- Published
- 2019
3. Cortical preparatory activity indexes learned motor memories
- Author
- Xulu Sun, Daniel J. O’Shea, Matthew D. Golub, Eric M. Trautmann, Saurabh Vyas, Stephen I. Ryu, and Krishna V. Shenoy
- Subjects
Multidisciplinary - Published
- 2022
4. Learning an Internal Dynamics Model from Control Demonstration.
- Author
- Matthew D. Golub, Steven M. Chase, and Byron M. Yu
- Published
- 2013
5. Constraints on neural redundancy
- Author
- Jay A. Hennig, Matthew D. Golub, Peter J. Lund, Patrick T. Sadtler, Emily R. Oby, Kristin M. Quick, Stephen I. Ryu, Elizabeth C. Tyler-Kabara, Aaron P. Batista, Byron M. Yu, and Steven M. Chase
- Subjects
neural redundancy, motor control, brain-computer interface, neural computation, Medicine, Science, Biology (General), QH301-705.5 - Abstract
Millions of neurons drive the activity of hundreds of muscles, meaning many different neural population activity patterns could generate the same movement. Studies have suggested that these redundant (i.e. behaviorally equivalent) activity patterns may be beneficial for neural computation. However, it is unknown what constraints may limit the selection of different redundant activity patterns. We leveraged a brain-computer interface, allowing us to define precisely which neural activity patterns were redundant. Rhesus monkeys made cursor movements by modulating neural activity in primary motor cortex. We attempted to predict the observed distribution of redundant neural activity. Principles inspired by work on muscular redundancy did not accurately predict these distributions. Surprisingly, the distributions of redundant neural activity and task-relevant activity were coupled, which enabled accurate predictions of the distributions of redundant activity. This suggests limits on the extent to which redundancy may be exploited by the brain for computation.
- Published
- 2018
- Full Text
- View/download PDF
6. Internal models engaged by brain-computer interface control.
- Author
- Matthew D. Golub, Byron M. Yu, and Steven M. Chase
- Published
- 2012
- Full Text
- View/download PDF
7. Neural constraints on learning.
- Author
- Patrick T. Sadtler, Kristin M. Quick, Matthew D. Golub, Steven M. Chase, Stephen I. Ryu, Elizabeth C. Tyler-Kabara, Byron M. Yu, and Aaron P. Batista
- Published
- 2014
- Full Text
- View/download PDF
8. Learning alters neural activity to simultaneously support memory and action
- Author
- Darby M. Losey, Jay A. Hennig, Emily R. Oby, Matthew D. Golub, Patrick T. Sadtler, Kristin M. Quick, Stephen I. Ryu, Elizabeth C. Tyler-Kabara, Aaron P. Batista, Byron M. Yu, and Steven M. Chase
- Abstract
How are we able to learn new behaviors without disrupting previously learned ones? To understand how the brain achieves this, we used a brain-computer interface (BCI) learning paradigm, which enables us to detect the presence of a memory of one behavior while performing another. We found that learning to use a new BCI map altered the neural activity that monkeys produced when they returned to using a familiar BCI map, in a way that was specific to the learning experience. That is, learning left a “memory trace.” This memory trace co-existed with proficient performance under the familiar map, primarily by altering dimensions of neural activity that did not impact behavior. Such a memory trace could provide the neural underpinning for the joint learning of multiple motor behaviors without interference.
- Published
- 2022
9. FixedPointFinder: A Tensorflow toolbox for identifying and characterizing fixed points in recurrent neural networks.
- Author
- Matthew D. Golub and David Sussillo
- Published
- 2018
- Full Text
- View/download PDF
10. Internal models for interpreting neural population activity during sensorimotor control
- Author
- Matthew D. Golub, Byron M. Yu, and Steven M. Chase
- Subjects
motor control, internal models, brain-machine interfaces, rhesus macaque, Medicine, Science, Biology (General), QH301-705.5 - Abstract
To successfully guide limb movements, the brain takes in sensory information about the limb, internally tracks the state of the limb, and produces appropriate motor commands. It is widely believed that this process uses an internal model, which describes our prior beliefs about how the limb responds to motor commands. Here, we leveraged a brain-machine interface (BMI) paradigm in rhesus monkeys and novel statistical analyses of neural population activity to gain insight into moment-by-moment internal model computations. We discovered that a mismatch between subjects’ internal models and the actual BMI explains roughly 65% of movement errors, as well as long-standing deficiencies in BMI speed control. We then used the internal models to characterize how the neural population activity changes during BMI learning. More broadly, this work provides an approach for interpreting neural population activity in the context of how prior beliefs guide the transformation of sensory input to motor output.
- Published
- 2015
- Full Text
- View/download PDF
11. New neural activity patterns emerge with long-term learning
- Author
- Alan D. Degenhart, Matthew D. Golub, Elizabeth C. Tyler-Kabara, Aaron P. Batista, Byron M. Yu, Jay A. Hennig, Emily R. Oby, and Steven M. Chase
- Subjects
long-term memory, long-term learning, skill learning, neural population, brain–computer interface, motor cortex, Biological Sciences, Multidisciplinary - Abstract
Significance: Consider a skill you would like to learn, like playing the piano. How do you progress from “Chopsticks” to Chopin? As you learn to do something new with your hands, does the brain also do something new? We found that monkeys learned new skilled behavior by generating new neural activity patterns. We used a brain–computer interface (BCI), which directly links neural activity to movement of a computer cursor, to encourage animals to generate new neural activity patterns. Over several days, the animals began to exhibit new patterns of neural activity that enabled them to control the BCI cursor. This suggests that learning to play the piano and other skills might also involve the generation of new neural activity patterns.

Learning has been associated with changes in the brain at every level of organization. However, it remains difficult to establish a causal link between specific changes in the brain and new behavioral abilities. We establish that new neural activity patterns emerge with learning. We demonstrate that these new neural activity patterns cause the new behavior. Thus, the formation of new patterns of neural population activity can underlie the learning of new skills.
- Published
- 2019
12. Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics
- Author
- Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, and David Sussillo
- Subjects
Article - Abstract
Recurrent neural networks (RNNs) are a widely used tool for modeling sequential data, yet they are often treated as inscrutable black boxes. Given a trained recurrent network, we would like to reverse engineer it: to obtain a quantitative, interpretable description of how it solves a particular task. Even for simple tasks, a detailed understanding of how recurrent networks work, or a prescription for how to develop such an understanding, remains elusive. In this work, we use tools from dynamical systems analysis to reverse engineer recurrent networks trained to perform sentiment classification, a foundational natural language processing task. Given a trained network, we find fixed points of the recurrent dynamics and linearize the nonlinear system around these fixed points. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. In particular, the topological structure of the fixed points and corresponding linearized dynamics reveal an approximate line attractor within the RNN, which we can use to quantitatively understand how the RNN solves the sentiment analysis task. Finally, we find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs) trained on multiple datasets, suggesting that our findings are not unique to a particular architecture or dataset. Overall, these results demonstrate that surprisingly universal and human interpretable computations can arise across a range of recurrent networks.
- Published
- 2020
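The fixed-point procedure this abstract describes (find states that the recurrent update maps to themselves, then linearize there) can be sketched in a few lines. The toy vanilla RNN below uses random weights as an illustrative stand-in, not one of the paper's trained sentiment networks:

```python
import numpy as np
from scipy.optimize import minimize

# Toy vanilla RNN: h_{t+1} = tanh(W h_t + b), with random stand-in weights.
rng = np.random.default_rng(0)
n = 8
W = 0.3 * rng.standard_normal((n, n))
b = 0.1 * rng.standard_normal(n)

def step(h):
    """One step of the autonomous (zero-input) dynamics."""
    return np.tanh(W @ h + b)

def speed(h):
    """q(h) = 0.5 * ||F(h) - h||^2, which is zero exactly at a fixed point."""
    d = step(h) - h
    return 0.5 * d @ d

def speed_grad(h):
    a = np.tanh(W @ h + b)
    J = (1.0 - a ** 2)[:, None] * W      # Jacobian of F at h
    return (J - np.eye(n)).T @ (a - h)

# Minimize q from many random initial states; near-zero minima are fixed points.
fixed_points = []
for _ in range(20):
    res = minimize(speed, rng.standard_normal(n), jac=speed_grad, method="L-BFGS-B")
    if res.fun < 1e-8:
        fixed_points.append(res.x)       # duplicates not removed in this sketch

# Linearize around a fixed point and inspect eigenvalues of the Jacobian
# (|eigenvalue| < 1 means locally stable in discrete time).
h_star = fixed_points[0]
a_star = np.tanh(W @ h_star + b)
eigvals = np.linalg.eigvals((1.0 - a_star ** 2)[:, None] * W)
```

In a trained network, a run of fixed points whose Jacobians each have one eigenvalue near 1 is the signature of the approximate line attractor the abstract describes; for this random toy we only check local stability.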
13. Universality and individuality in neural dynamics across large populations of recurrent networks
- Author
- Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, and David Sussillo
- Subjects
Article - Abstract
Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of modeling architectural choices? Or alternatively, are the learned dynamics highly sensitive to different model choices? Knowing the answer to these questions has strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks and characterize their nonlinear dynamics. We find the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold—the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics—often appears universal across all architectures.
- Published
- 2020
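The abstract above compares low-dimensional representations across networks with canonical correlation analysis (CCA). A minimal linear CCA, applied here to two synthetic "networks" whose representations differ only by a rotation (so all canonical correlations come out near 1, even though the raw geometries differ):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two (samples x units) activity matrices.
    Plain linear CCA via orthonormalization + SVD; no regularization, so it
    assumes full-rank, well-conditioned inputs (an illustrative simplification).
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)  # orthonormal basis of col(X)
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)  # orthonormal basis of col(Y)
    # Singular values of Ux^T Uy are the cosines of the principal angles,
    # i.e. the canonical correlations.
    corrs = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return np.clip(corrs, 0.0, 1.0)

rng = np.random.default_rng(1)
Z = rng.standard_normal((200, 5))                 # shared latent trajectories
R, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # random orthogonal map
corrs = canonical_correlations(Z, Z @ R)          # rotated copy of the same data
```

This invariance to rotation is exactly why the abstract flags CCA-style similarity measures as sensitive only to subspace alignment, not to the finer representational geometry.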
14. Computation Through Neural Population Dynamics
- Author
- Krishna V. Shenoy, Saurabh Vyas, David Sussillo, and Matthew D. Golub
- Subjects
dynamical systems theory, computation, neural population dynamics, motor control, computational biology, models of neural computation, deep learning, neuroscience - Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
- Published
- 2020
15. Learning is shaped by abrupt changes in neural engagement
- Author
- Byron M. Yu, Jay A. Hennig, Lindsay A. Bahureksa, Stephen I. Ryu, Steven M. Chase, Aaron P. Batista, Elizabeth C. Tyler-Kabara, Matthew D. Golub, Kristin M. Quick, Patrick T. Sadtler, and Emily R. Oby
- Subjects
neural activity, neural population, arousal, brain–computer interface, cognitive psychology - Abstract
Internal states such as arousal, attention, and motivation are known to modulate brain-wide neural activity, but how these processes interact with learning is not well understood. During learning, the brain must modify the neural activity it produces to improve behavioral performance. How do internal states affect the evolution of this learning process? Using a brain-computer interface (BCI) learning paradigm in non-human primates, we identified large fluctuations in neural population activity in motor cortex (M1) indicative of arousal-like internal state changes. These fluctuations drove population activity along dimensions we term neural engagement axes. Neural engagement increased abruptly at the start of learning, and then gradually retreated. In a BCI, the causal relationship between neural activity and behavior is known. This allowed us to understand how these changes impacted behavioral performance for different task goals. We found that neural engagement interacted with learning, helping to explain why animals learned some task goals more quickly than others.
- Published
- 2020
16. Learning is shaped by abrupt changes in neural engagement
- Author
- Aaron P. Batista, Matthew D. Golub, Elizabeth C. Tyler-Kabara, Stephen I. Ryu, Steven M. Chase, Kristin M. Quick, Byron M. Yu, Jay A. Hennig, Lindsay A. Bahureksa, Emily R. Oby, and Patrick T. Sadtler
- Subjects
action potentials, neural population, arousal, attention, learning, motor cortex, Macaca mulatta, brain-computer interfaces, neuroscience - Abstract
Internal states such as arousal, attention and motivation modulate brain-wide neural activity, but how these processes interact with learning is not well understood. During learning, the brain modifies its neural activity to improve behavior. How do internal states affect this process? Using a brain-computer interface learning paradigm in monkeys, we identified large, abrupt fluctuations in neural population activity in motor cortex indicative of arousal-like internal state changes, which we term 'neural engagement.' In a brain-computer interface, the causal relationship between neural activity and behavior is known, allowing us to understand how neural engagement impacted behavioral performance for different task goals. We observed stereotyped changes in neural engagement that occurred regardless of how they impacted performance. This allowed us to predict how quickly different task goals were learned. These results suggest that changes in internal states, even those seemingly unrelated to goal-seeking behavior, can systematically influence how behavior improves with learning.
- Published
- 2020
17. Cortical preparatory activity indexes learned motor memories
- Author
- Xulu Sun, Daniel J. O'Shea, Matthew D. Golub, Eric M. Trautmann, Saurabh Vyas, Stephen I. Ryu, and Krishna V. Shenoy
- Subjects
Motor Skills ,Movement ,Motor Cortex ,Animals ,Learning ,Muscle, Skeletal ,Macaca mulatta - Abstract
The brain's remarkable ability to learn and execute various motor behaviours harnesses the capacity of neural populations to generate a variety of activity patterns. Here we explore systematic changes in preparatory activity in motor cortex that accompany motor learning. We trained rhesus monkeys to learn an arm-reaching task
- Published
- 2020
18. Author response: Constraints on neural redundancy
- Author
- Elizabeth C. Tyler-Kabara, Patrick T. Sadtler, Byron M. Yu, Matthew D. Golub, Aaron P. Batista, Jay A. Hennig, Steven M. Chase, Peter J. Lund, Stephen I. Ryu, Emily R. Oby, and Kristin M. Quick
- Subjects
Computer science, Redundancy (engineering), Reliability engineering - Published
- 2018
19. Computation through Cortical Dynamics
- Author
- Laura N. Driscoll, Matthew D. Golub, and David Sussillo
- Subjects
neurons, frontal cortex, computation, neural population, pattern recognition, dynamics, neuroscience - Abstract
Neural mechanisms that support flexible sensorimotor computations are not well understood. In a dynamical system whose state is determined by interactions among neurons, computations can be rapidly reconfigured by controlling the system’s inputs and initial conditions. To investigate whether the brain employs such control mechanisms, we recorded from the dorsomedial frontal cortex of monkeys trained to measure and produce time intervals in two sensorimotor contexts. The geometry of neural trajectories during the production epoch was consistent with a mechanism wherein the measured interval and sensorimotor context exerted control over the cortical dynamics by adjusting the system’s initial condition and input, respectively. These adjustments, in turn, set the speed at which activity evolved in the production epoch allowing the animal to flexibly produce different time intervals. These results provide evidence that the language of dynamical systems can be used to parsimoniously link brain activity to sensorimotor computations.
- Published
- 2018
20. Constraints on neural redundancy
- Author
- Stephen I. Ryu, Emily R. Oby, Elizabeth C. Tyler-Kabara, Matthew D. Golub, Kristin M. Quick, Aaron P. Batista, Jay A. Hennig, Steven M. Chase, Patrick T. Sadtler, Peter J. Lund, and Byron M. Yu
- Subjects
neural redundancy, motor control, brain-computer interface, neural computation, primary motor cortex, rhesus macaque, Macaca mulatta, Medicine, Science, Biology (General), QH301-705.5 - Abstract
Millions of neurons drive the activity of hundreds of muscles, meaning many different neural population activity patterns could generate the same movement. Studies have suggested that these redundant (i.e. behaviorally equivalent) activity patterns may be beneficial for neural computation. However, it is unknown what constraints may limit the selection of different redundant activity patterns. We leveraged a brain-computer interface, allowing us to define precisely which neural activity patterns were redundant. Rhesus monkeys made cursor movements by modulating neural activity in primary motor cortex. We attempted to predict the observed distribution of redundant neural activity. Principles inspired by work on muscular redundancy did not accurately predict these distributions. Surprisingly, the distributions of redundant neural activity and task-relevant activity were coupled, which enabled accurate predictions of the distributions of redundant activity. This suggests limits on the extent to which redundancy may be exploited by the brain for computation.

eLife digest: When you swing a tennis racket, muscles in your arm contract in a specific sequence. For this to happen, millions of neurons in your brain and spinal cord must fire to make those muscles contract. If you swing the racket a second time, the same muscles in your arm will contract again. But the firing pattern of the underlying neurons will probably be different. This phenomenon, in which different patterns of neural activity generate the same outcome, is called neural redundancy. Neural redundancy allows a set of neurons to perform multiple tasks at once. For example, the same neurons may drive an arm movement while simultaneously planning the next activity. But does performing a given task constrain how often different patterns of neural activity can be produced? If so, this would limit whether other tasks could be carried out at the same time. To address this, Hennig et al. trained macaque monkeys to use a brain-computer interface (BCI). This is a device that reads out electrical brain activity and converts it into signals that can be used to control another device. The key advantage of a BCI is that the redundant activity patterns are precisely known. The monkeys learned to use their brain activity, via the BCI, to move a cursor on a computer screen in different directions. The results revealed that monkeys could only produce a limited number of different patterns of brain activity for a given BCI cursor movement. This suggests that the ability of a group of neurons to multitask is restricted. For example, if the same set of neurons is involved in both planning and performing movements, then an animal’s ability to plan a future movement will depend on the one it is currently performing. BCIs can help patients who have suffered stroke or paralysis. They enable patients to use their brain activity to control a computer or even robotic limbs. Understanding how the brain controls BCIs will help us improve their performance and deepen our knowledge of how the brain plans and performs movements. This might include designing BCIs that allow users to multitask more effectively.
- Published
- 2018
21. Learning by neural reassociation
- Author
- Matthew D. Golub, Emily R. Oby, Steven M. Chase, Elizabeth C. Tyler-Kabara, Stephen I. Ryu, Byron M. Yu, Patrick T. Sadtler, Aaron P. Batista, and Kristin M. Quick
- Subjects
learning, neural population, brain mapping, brain–computer interfaces, primary motor cortex, Macaca mulatta, neuroscience - Abstract
Behavior is driven by coordinated activity across a population of neurons. Learning requires the brain to change the neural population activity produced to achieve a given behavioral goal. How does population activity reorganize during learning? We studied intracortical population activity in the primary motor cortex of rhesus macaques during short-term learning in a brain-computer interface (BCI) task. In a BCI, the mapping between neural activity and behavior is exactly known, enabling us to rigorously define hypotheses about neural reorganization during learning. We found that changes in population activity followed a suboptimal neural strategy of Reassociation: animals relied on a fixed repertoire of activity patterns and associated those patterns with different movements after learning. These results indicate that the activity patterns that a neural population can generate are even more constrained than previously thought and might explain why it is often difficult to quickly learn to a high level of proficiency.
- Published
- 2017
22. Motor cortical control of movement speed with implications for brain-machine interface control
- Author
- Byron M. Yu, Matthew D. Golub, Andrew B. Schwartz, and Steven M. Chase
- Subjects
Male ,education.field_of_study ,Communication ,Electronic speed control ,Physiology ,Computer science ,business.industry ,Movement ,General Neuroscience ,Population ,Motor Cortex ,Motor control ,Context (language use) ,Pattern recognition ,Articles ,Kinematics ,Kalman filter ,Macaca mulatta ,Brain-Computer Interfaces ,Animals ,Artificial intelligence ,Neural coding ,education ,business ,Brain–computer interface - Abstract
Motor cortex plays a substantial role in driving movement, yet the details underlying this control remain unresolved. We analyzed the extent to which movement-related information could be extracted from single-trial motor cortical activity recorded while monkeys performed center-out reaching. Using information theoretic techniques, we found that single units carry relatively little speed-related information compared with direction-related information. This result is not mitigated at the population level: simultaneously recorded population activity predicted speed with significantly lower accuracy relative to direction predictions. Furthermore, a unit-dropping analysis revealed that speed accuracy would likely remain lower than direction accuracy, even given larger populations. These results suggest that the instantaneous details of single-trial movement speed are difficult to extract using commonly assumed coding schemes. This apparent paucity of speed information takes particular importance in the context of brain-machine interfaces (BMIs), which rely on extracting kinematic information from motor cortex. Previous studies have highlighted subjects' difficulties in holding a BMI cursor stable at targets. These studies, along with our finding of relatively little speed information in motor cortex, inspired a speed-dampening Kalman filter (SDKF) that automatically slows the cursor upon detecting changes in decoded movement direction. Effectively, SDKF enhances speed control by using prevalent directional signals, rather than requiring speed to be directly decoded from neural activity. SDKF improved success rates by a factor of 1.7 relative to a standard Kalman filter in a closed-loop BMI task requiring stable stops at targets. BMI systems enabling stable stops will be more effective and user-friendly when translated into clinical applications.
- Published
- 2014
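The speed-dampening idea described above (slow the cursor when the decoded movement direction changes, rather than decoding speed directly) can be sketched as a gain applied to the decoded velocity. The exponential form of the gain and its `alpha` parameter are illustrative assumptions, not the paper's actual SDKF implementation:

```python
import numpy as np

def damp_speed(v_prev, v_new, alpha=4.0):
    """Scale the newly decoded velocity down when its direction deviates from
    the previous one. `alpha` sets damping strength; both the exponential gain
    and this value are illustrative choices, not parameters from the paper.
    """
    n_prev = np.linalg.norm(v_prev)
    n_new = np.linalg.norm(v_new)
    if n_prev == 0.0 or n_new == 0.0:
        return v_new                      # no direction to compare against
    cos_theta = np.dot(v_prev, v_new) / (n_prev * n_new)
    # gain = 1 when direction is unchanged (cos = 1), shrinks toward 0
    # as the decoded direction turns or reverses.
    gain = np.exp(-alpha * (1.0 - cos_theta))
    return gain * v_new

straight = damp_speed(np.array([1.0, 0.0]), np.array([1.0, 0.0]))  # no turn
turning = damp_speed(np.array([1.0, 0.0]), np.array([0.0, 1.0]))   # 90° turn
```

The design point matches the abstract's rationale: directional signals are decoded reliably, so using direction changes to gate speed avoids relying on the weak speed information in the neural activity.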
23. Neural constraints on learning
- Author
- Byron M. Yu, Matthew D. Golub, Elizabeth C. Tyler-Kabara, Aaron P. Batista, Kristin M. Quick, Patrick T. Sadtler, Steven M. Chase, and Stephen I. Ryu
- Subjects
learning, brain mapping, computational neuroscience, biological neural networks, motor skill, brain–computer interface, Multidisciplinary - Abstract
Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain-computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain-computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess.
- Published
- 2014
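The "intrinsic manifold" in this abstract is a low-dimensional subspace of the high-dimensional neural space. The paper identified it with factor analysis; the sketch below uses PCA as a simpler stand-in, estimating a subspace from synthetic population activity and scoring how much of a candidate pattern's energy falls outside it (all numbers and data here are illustrative):

```python
import numpy as np

# Synthetic population activity confined to a low-dimensional subspace:
# 40 "neurons" driven by 6 latent factors, plus a little noise.
rng = np.random.default_rng(2)
latents = rng.standard_normal((500, 6))
loading = rng.standard_normal((6, 40))
activity = latents @ loading + 0.05 * rng.standard_normal((500, 40))

# Estimate the intrinsic manifold as the top principal subspace.
mu = activity.mean(axis=0)
_, _, Vt = np.linalg.svd(activity - mu, full_matrices=False)
manifold = Vt[:6]                  # rows span the estimated 6-D manifold

def outside_fraction(pattern):
    """Fraction of a pattern's (mean-centered) energy outside the manifold."""
    x = pattern - mu
    inside = manifold.T @ (manifold @ x)   # projection onto the manifold
    return np.linalg.norm(x - inside) / np.linalg.norm(x)

within = outside_fraction(activity[0])   # a pattern the population produced
outside = outside_fraction(mu + rng.standard_normal(40) * activity.std())
```

A pattern the population actually produced scores near zero, while a random pattern puts most of its energy outside the subspace, mirroring the within- vs outside-manifold distinction the study's perturbed BCI mappings were built around.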
24. Brain–computer interfaces for dissecting cognitive processes underlying sensorimotor control
- Author
- Byron M. Yu, Matthew D. Golub, Aaron P. Batista, and Steven M. Chase
- Subjects
sensorimotor control, cognition, multisensory integration, sensory feedback, brain–computer interface, neuroscience - Abstract
Sensorimotor control engages cognitive processes such as prediction, learning, and multisensory integration. Understanding the neural mechanisms underlying these cognitive processes with arm reaching is challenging because we currently record only a fraction of the relevant neurons, the arm has nonlinear dynamics, and multiple modalities of sensory feedback contribute to control. A brain–computer interface (BCI) is a well-defined sensorimotor loop with key simplifying advantages that address each of these challenges, while engaging similar cognitive processes. As a result, BCI is becoming recognized as a powerful tool for basic scientific studies of sensorimotor control. Here, we describe the benefits of BCI for basic scientific inquiries and review recent BCI studies that have uncovered new insights into the neural mechanisms underlying sensorimotor control.
- Published
- 2016
25. Internal models for interpreting neural population activity during sensorimotor control
- Author
-
Byron M. Yu, Matthew D. Golub, and Steven M. Chase
- Subjects
QH301-705.5, Process (engineering), Computer science, Science, Movement, Interface (computing), Models, Neurological, education, Internal model, Context (language use), Sensory system, General Biochemistry, Genetics and Molecular Biology, internal models, Feedback, Sensory, motor control, Animals, Biology (General), brain-machine interfaces, Brain–computer interface, General Immunology and Microbiology, General Neuroscience, Work (physics), Motor control, Extremities, General Medicine, Macaca mulatta, Brain-Computer Interfaces, Medicine, Sensorimotor Cortex, Other, Psychomotor Performance, Research Article, Neuroscience, rhesus macaque, Cognitive psychology - Abstract
To successfully guide limb movements, the brain takes in sensory information about the limb, internally tracks the state of the limb, and produces appropriate motor commands. It is widely believed that this process uses an internal model, which describes our prior beliefs about how the limb responds to motor commands. Here, we leveraged a brain-machine interface (BMI) paradigm in rhesus monkeys and novel statistical analyses of neural population activity to gain insight into moment-by-moment internal model computations. We discovered that a mismatch between subjects’ internal models and the actual BMI explains roughly 65% of movement errors, as well as long-standing deficiencies in BMI speed control. We then used the internal models to characterize how the neural population activity changes during BMI learning. More broadly, this work provides an approach for interpreting neural population activity in the context of how prior beliefs guide the transformation of sensory input to motor output. DOI: http://dx.doi.org/10.7554/eLife.10015.001

eLife digest
The human brain is widely hypothesized to construct “inner beliefs” about how the world works. It is thought that we need this conception to coordinate our movements and anticipate rapid events that go on around us. A driver, for example, needs to predict how the car should behave in response to every turn of the steering wheel and every tap on the brake. But on icy roads, these predictions will often not reflect how the car would behave. Applying the brakes sharply in these conditions could send the car skidding uncontrollably rather than stopping. In general, a mismatch between one’s inner beliefs and reality is thought to cause errors and accidents. Yet this compelling hypothesis has not yet been fully investigated. Golub et al. investigated this hypothesis by conducting a “brain-machine interface” experiment.
In this experiment, neural signals from the brains of two rhesus macaques were recorded using arrays of electrodes and translated into movements of a cursor on a computer screen. The monkeys were then trained to mentally move the cursor to hit targets on the screen. The monkeys’ cursor movements were remarkably precise. In fact, the experiment showed that the monkeys could internally predict their cursor movements just as a driver predicts how a car will move when turning the steering wheel. These findings indicate that the monkeys have likely developed inner beliefs to predict how their neural signals drive the cursor, and that these beliefs helped coordinate their performance. In addition, when the monkeys did make mistakes, their neural signals were not entirely wrong—in fact they were typically consistent with the monkeys’ inner beliefs about how the cursor moves. A mismatch between these inner beliefs and reality explained most of the monkeys’ mistakes. The brain constructs such inner beliefs over time through experience and learning. To study this learning process, Golub et al. next conducted an experiment in which the cursor moved in a way that was substantially different from the monkey’s inner beliefs. This experiment uncovered that, during the course of learning, the monkey’s inner beliefs realigned to better match the movements of the new cursor. Taken together, this work provides a framework for understanding how the brain transforms sensory information into instructions for movement. The findings could also help improve the performance of brain-machine interfaces and suggest how we can learn new skills more rapidly and proficiently in everyday life. DOI: http://dx.doi.org/10.7554/eLife.10015.002
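The central claim, that internal-model mismatch accounts for much of the movement error, can be illustrated with a toy linear simulation (all mappings and noise levels here are hypothetical; in the study the internal model was estimated from neural data, not assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear BMI mapping from population activity to cursor
# velocity, and a deliberately mismatched internal model of it.
n_neurons, n_trials = 20, 500
B_true = rng.standard_normal((2, n_neurons))
B_internal = B_true + 0.3 * rng.standard_normal((2, n_neurons))

errors, mismatch_pred = [], []
for _ in range(n_trials):
    desired = rng.standard_normal(2)
    desired /= np.linalg.norm(desired)               # unit velocity toward target
    # Activity that would produce `desired` under the internal model,
    # executed with a little motor noise.
    y = np.linalg.pinv(B_internal) @ desired + 0.01 * rng.standard_normal(n_neurons)
    produced = B_true @ y                            # what the BMI actually does
    errors.append(produced - desired)                # observed movement error
    mismatch_pred.append((B_true - B_internal) @ y)  # error the mismatch predicts
errors, mismatch_pred = np.array(errors), np.array(mismatch_pred)

resid = errors - mismatch_pred
frac_explained = 1 - resid.var() / errors.var()
print(frac_explained)
```

Most of the observed error variance is accounted for by the model mismatch; the remainder comes from the motor noise that neither mapping can predict.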
- Published
- 2015
26. Author response: Internal models for interpreting neural population activity during sensorimotor control
- Author
-
Byron M. Yu, Matthew D. Golub, and Steven M. Chase
- Subjects
Sensorimotor control, Neural population, Psychology, Neuroscience - Published
- 2015
27. Publisher Correction: Learning by neural reassociation
- Author
-
Emily R. Oby, Kristin M. Quick, Aaron P. Batista, Stephen I. Ryu, Elizabeth C. Tyler-Kabara, Matthew D. Golub, Steven M. Chase, Patrick T. Sadtler, and Byron M. Yu
- Subjects
Computer science, General Neuroscience, Element (category theory), Neuroscience, Algorithm - Abstract
In the version of this article initially published, equation (10) contained cos Θ instead of sin Θ as the bottom element of the right-hand vector. The error has been corrected in the HTML and PDF versions of the article.
- Published
- 2018
28. Learning an Internal Dynamics Model from Control Demonstration
- Author
-
Matthew D. Golub, Steven M. Chase, and Byron M. Yu
- Subjects
education ,health care economics and organizations ,Article - Abstract
Much work in optimal control and inverse control has assumed that the controller has perfect knowledge of plant dynamics. However, if the controller is a human or animal subject, the subject’s internal dynamics model may differ from the true plant dynamics. Here, we consider the problem of learning the subject’s internal model from demonstrations of control and knowledge of task goals. Due to sensory feedback delay, the subject uses an internal model to generate an internal prediction of the current plant state, which may differ from the actual plant state. We develop a probabilistic framework and exact EM algorithm to jointly estimate the internal model, internal state trajectories, and feedback delay. We applied this framework to demonstrations by a nonhuman primate of brain-machine interface (BMI) control. We discovered that the subject’s internal model deviated from the true BMI plant dynamics and provided significantly better explanation of the recorded neural control signals than did the true plant dynamics.
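A much-simplified sketch of the estimation machinery may help (a linear-Gaussian toy with known noise covariances, directly observed state, and no feedback delay, far short of the paper's full framework): EM alternates a Kalman-filter/RTS-smoother E-step with a closed-form M-step for the dynamics matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "plant": x[t+1] = A x[t] + w[t], observed as y[t] = x[t] + v[t].
A_true = np.array([[0.9, 0.1], [-0.1, 0.9]])
Q, R = 0.01 * np.eye(2), 0.0025 * np.eye(2)      # process / observation noise
T = 500
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + rng.multivariate_normal(np.zeros(2), Q)
y = x + rng.multivariate_normal(np.zeros(2), R, size=T)

def em_dynamics(y, n_iters=20):
    n = len(y)
    A, I = np.eye(2), np.eye(2)
    for _ in range(n_iters):
        # E-step, forward pass: Kalman filter.
        xf, Pf = np.zeros_like(y), np.zeros((n, 2, 2))
        xp, Pp = np.zeros_like(y), np.zeros((n, 2, 2))
        xf[0], Pf[0] = y[0], R.copy()
        for t in range(1, n):
            xp[t] = A @ xf[t - 1]
            Pp[t] = A @ Pf[t - 1] @ A.T + Q
            K = Pp[t] @ np.linalg.inv(Pp[t] + R)
            xf[t] = xp[t] + K @ (y[t] - xp[t])
            Pf[t] = (I - K) @ Pp[t]
        # E-step, backward pass: RTS smoother (standard lag-one approximation).
        xs, Ps = xf.copy(), Pf.copy()
        Pcross = np.zeros((n, 2, 2))             # Cov(x[t], x[t-1] | all y)
        for t in range(n - 2, -1, -1):
            J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
            xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
            Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
            Pcross[t + 1] = Ps[t + 1] @ J.T
        # M-step: closed-form update of the dynamics matrix.
        S10 = sum(np.outer(xs[t + 1], xs[t]) + Pcross[t + 1] for t in range(n - 1))
        S00 = sum(np.outer(xs[t], xs[t]) + Ps[t] for t in range(n - 1))
        A = S10 @ np.linalg.inv(S00)
    return A

A_est = em_dynamics(y)
print(np.round(A_est, 2))
```

The recovered dynamics matrix converges toward `A_true`; the paper's exact EM extends this pattern to jointly estimate the internal model, the internal state trajectories, and the sensory feedback delay.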
- Published
- 2014
29. Internal models engaged by brain-computer interface control
- Author
-
Matthew D. Golub, Steven M. Chase, and Byron M. Yu
- Subjects
Neurons, Computer science, Interface (computing), education, Models, Neurological, Motor control, Brain, Sensory system, Cursor (databases), Macaca mulatta, Article, Electrodes, Implanted, Sensory Physiology, Feedback, Sensory, Brain-Computer Interfaces, Task Performance and Analysis, medicine, Arm, Animals, Motor learning, Neuroscience, Simulation, Brain–computer interface, Motor cortex - Abstract
Internal models have been proposed to explain the brain's ability to compensate for sensory feedback delays by predicting the sensory consequences of movement commands. Single-neuron studies in the oculomotor and vestibulo-ocular systems have provided evidence of internal models, as have behavioral studies in the skeletomotor system. Here, we present evidence of internal models from simultaneously recorded population activity underlying closed-loop brain-computer interface (BCI) control. We studied cursor-based BCI control by a nonhuman primate implanted with a multi-electrode array in motor cortex. Using a novel BCI task, we measured the visual feedback processing delay to be about 130 milliseconds. By examining the task-based appropriateness of the population activity at different time lags, we found evidence that the subject compensates for the feedback delay by predicting upcoming cursor positions, suggesting the use of an internal forward model. Lastly, we examined the time course of internal model adaptation after altering the mapping between population activity and cursor movements. This study suggests that closed-loop BCI experiments combined with novel statistical analyses can provide insight into the neural substrates of feedback motor control and motor learning.
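The lag-scoring logic can be illustrated with a toy closed-loop simulation (all parameters hypothetical): a simulated subject aims at the target using a delayed view of the cursor, and we recover the delay as the lag at which the produced velocity best points at the target.

```python
import numpy as np

rng = np.random.default_rng(3)

true_delay = 4                       # hypothetical feedback delay, in time steps
T, jump_every = 3000, 20
target = np.zeros((T, 2))
p = np.zeros((T, 2))                 # cursor position
v = np.zeros((T, 2))                 # produced velocity
for t in range(1, T):
    if t % jump_every == 0:
        target[t:] = rng.standard_normal(2)   # target jumps to a new location
    s = max(t - 1 - true_delay, 0)            # the snapshot the subject "sees"
    aim = target[s] - p[s]
    aim /= np.linalg.norm(aim) + 1e-9
    v[t] = 0.05 * aim + 0.02 * rng.standard_normal(2)
    p[t] = p[t - 1] + v[t]

def score(k):
    """Mean alignment of produced velocity with the lag-k ideal direction."""
    cos = []
    for t in range(k + 20, T):
        ideal = target[t - 1 - k] - p[t - 1 - k]
        cos.append(v[t] @ ideal / (np.linalg.norm(v[t]) * np.linalg.norm(ideal) + 1e-9))
    return np.mean(cos)

est_delay = max(range(9), key=score)
print(est_delay)
```

Alignment peaks at the lag the subject actually used, which is the same "task-based appropriateness at different time lags" logic the abstract describes for measuring the ~130 ms visual feedback delay.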
- Published
- 2013