34 results for "Kechen Zhang"
Search Results
2. Author Correction: Online learning for orientation estimation during translation in an insect ring attractor network
- Author
-
Brian S. Robinson, Raphael Norman-Tenazas, Martha Cervantes, Danilo Symonette, Erik C. Johnson, Justin Joyce, Patricia K. Rivlin, Grace M. Hwang, Kechen Zhang, and William Gray-Roncal
- Subjects
Multidisciplinary - Published
- 2022
- Full Text
- View/download PDF
3. Online learning for orientation estimation during translation in an insect ring attractor network
- Author
-
Brian S. Robinson, Raphael Norman-Tenazas, Martha Cervantes, Danilo Symonette, Erik C. Johnson, Justin Joyce, Patricia K. Rivlin, Grace M. Hwang, Kechen Zhang, and William Gray-Roncal
- Subjects
Education, Distance; Insecta; Multidisciplinary; Animals; Drosophila; Nervous System; Algorithms - Abstract
Insect neural systems are a promising source of inspiration for new navigation algorithms, especially on low size, weight, and power platforms. There have been unprecedented recent neuroscience breakthroughs with Drosophila in behavioral and neural imaging experiments as well as the mapping of detailed connectivity of neural structures. General mechanisms for learning orientation in the central complex (CX) of Drosophila have been investigated previously; however, it is unclear how these underlying mechanisms extend to cases where there is translation through an environment (beyond only rotation), which is critical for navigation in robotic systems. Here, we develop a CX neural connectivity-constrained model that performs sensor fusion, as well as unsupervised learning of visual features for path integration; we demonstrate the viability of this circuit for use in robotic systems in simulated and physical environments. Furthermore, we propose a theoretical understanding of how distributed online unsupervised network weight modification can be leveraged for learning in a trajectory through an environment by minimizing orientation estimation error. Overall, our results may enable a new class of CX-derived low power robotic navigation algorithms and lead to testable predictions to inform future neuroscience experiments.
- Published
- 2022
- Full Text
- View/download PDF
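The ring-attractor mechanism summarized in this abstract can be sketched in a few lines. Below is a minimal rate-based ring attractor in which a cosine weight profile sustains a heading bump; the weight profile, cue strength, and population-vector decoding are illustrative assumptions, not the paper's connectome-constrained CX model.

```python
import numpy as np

def ring_attractor(n=64, steps=500, dt=0.1, cue_angle=1.0):
    """Minimal rate-based ring attractor: cosine-profile recurrent
    weights sustain a single activity bump whose peak position encodes
    heading. A toy stand-in for the connectome-constrained CX circuit
    described in the abstract (all parameters are illustrative)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Local excitation plus broad inhibition, normalized by n.
    W = (-0.5 + 1.5 * np.cos(theta[:, None] - theta[None, :])) / n
    cue = 0.5 * np.exp(np.cos(theta - cue_angle) - 1.0)  # weak heading cue
    r = np.zeros(n)
    for _ in range(steps):
        r += dt * (-r + np.maximum(0.0, W @ r + cue + 1.0))
    # Population-vector decode of the bump's peak position.
    decoded = np.angle(np.sum(r * np.exp(1j * theta))) % (2.0 * np.pi)
    return r, decoded

rates, heading = ring_attractor()
```

The decoded heading settles on the cue direction; in the paper, the corresponding bump is additionally rotated by angular-velocity input and reshaped by learned visual features.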
4. Salt-Mediated Continuous Phase Transformation in g-C3N4: From Regulated Atomic Configurations to Enhanced Photocatalysis
- Author
-
Jun Xiao and Kechen Zhang
- Published
- 2022
- Full Text
- View/download PDF
5. Online learning for orientation estimation during translation in an insect ring attractor network
- Author
-
Martha Cervantes, William Gray-Roncal, Grace M. Hwang, Danilo Symonette, Patricia K. Rivlin, Justin Joyce, Brian S. Robinson, Kechen Zhang, Erik C. B. Johnson, and Raphael Norman-Tenazas
- Subjects
Computer science; Orientation (computer vision); Unsupervised learning; Artificial intelligence; Sensor fusion; Translation (geometry); Machine learning; Rotation (mathematics); Attractor network - Abstract
Insect neural systems are a promising source of inspiration for new algorithms for navigation, especially on low size, weight, and power platforms. There have been unprecedented recent neuroscience breakthroughs with Drosophila in behavioral and neural imaging experiments as well as the mapping of detailed connectivity of neural structures. General mechanisms for learning orientation in the central complex (CX) of Drosophila have been investigated previously; however, it is unclear how these underlying mechanisms extend to cases where there is translation through an environment (beyond only rotation), which is critical for navigation in robotic systems. Here, we develop a CX neural connectivity-constrained model that performs sensor fusion, as well as unsupervised learning of visual features for path integration; we demonstrate the viability of this circuit for use in robotic systems in simulated and physical environments. Furthermore, we propose a theoretical understanding of how distributed online unsupervised network weight modification can be leveraged for learning in a trajectory through an environment by minimizing orientation estimation error. Overall, our results here may enable a new class of CX-derived low power robotic navigation algorithms and lead to testable predictions to inform future neuroscience experiments. Summary: An insect neural connectivity-constrained model performs sensor fusion and online learning for orientation estimation.
- Published
- 2021
- Full Text
- View/download PDF
6. Fitting of dynamic recurrent neural network models to sensory stimulus-response data
- Author
-
Kechen Zhang and R. Ozgur Doruk
- Subjects
Sensory Receptor Cells; Computer science; Models, Neurological; Biophysics; Action Potentials; Sensory system; Stimulus (physiology); Poisson distribution; Reaction Time; Animals; Molecular Biology; Fourier series; Artificial neural network; Pattern recognition; Cell Biology; Electric Stimulation; Atomic and Molecular Physics, and Optics; Amplitude; Recurrent neural network; Neural Networks, Computer; Artificial intelligence; Likelihood function; Photic Stimulation - Abstract
We present a theoretical study aiming at model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered a smooth time-dependent variable, the associated response will be a set of neural spike timings (roughly the instants of successive action potential peaks) that carry no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response data pair by using the maximum likelihood estimation method, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation feature of recurrent dynamical neural network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a phased cosine Fourier series having a fixed amplitude and frequency but a randomly drawn phase. Various values of amplitude, stimulus component size, and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical form. Finally, to demonstrate the success of this approach, the results are compared with those of a previous study that used the same model, nominal parameters, and stimulus structure, as well as with a study based on different models.
- Published
- 2018
- Full Text
- View/download PDF
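The fitting principle described in this abstract, maximum likelihood with a likelihood derived from Poisson spiking statistics, can be illustrated with a toy rate model. The exponential rate function rate(t) = exp(a + b*stim(t)) below is a hypothetical stand-in for the paper's recurrent-network rate; the grid-search recovery of (a, b) just demonstrates the structure of the spike-timing likelihood.

```python
import numpy as np

def neg_log_likelihood(a, b, spike_times, t_grid, stim):
    """Negative log-likelihood of spike timings under an inhomogeneous
    Poisson model:  -log L = integral(rate dt) - sum(log rate(t_spike)).
    The rate model rate(t) = exp(a + b*stim(t)) is an illustrative
    stand-in for the paper's recurrent-network firing rate."""
    dt = t_grid[1] - t_grid[0]
    integral = np.sum(np.exp(a + b * stim)) * dt
    log_rate_at_spikes = a + b * np.interp(spike_times, t_grid, stim)
    return integral - np.sum(log_rate_at_spikes)

# Simulate spikes from known parameters, then recover them by grid search.
rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 100.0, 20000)
stim = np.cos(2.0 * np.pi * 0.5 * t_grid)        # phased-cosine stimulus
true_a, true_b = 1.0, 0.8
dt = t_grid[1] - t_grid[0]
spikes = t_grid[rng.random(t_grid.size) < np.exp(true_a + true_b * stim) * dt]

a_vals = np.linspace(0.0, 2.0, 41)
b_vals = np.linspace(0.0, 2.0, 41)
nll = np.array([[neg_log_likelihood(a, b, spikes, t_grid, stim)
                 for b in b_vals] for a in a_vals])
i, j = np.unravel_index(np.argmin(nll), nll.shape)
a_hat, b_hat = a_vals[i], b_vals[j]
```

The estimates land near the generating parameters; the paper's actual optimization runs over the full recurrent-network parameter set rather than a two-parameter grid.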
7. Cognitive swarming: an approach from the theoretical neuroscience of hippocampal function
- Author
-
Joseph D. Monaco, Kevin Schultz, Kechen Zhang, and Grace M. Hwang
- Subjects
Self-organization; Cognitive science; Computational neuroscience; Computer science; Swarming (honey bee); Cognition; Hippocampal function; Spatial memory - Published
- 2019
- Full Text
- View/download PDF
8. Spatial synchronization codes from coupled rate-phase neurons
- Author
-
Joseph D. Monaco, Kechen Zhang, Hugh T. Blair, and Rose M. De Guzman
- Subjects
Male; Physiology; Precession; Action Potentials; Hippocampal formation; Spatial memory; Hippocampus; Phase Determination; Animal Cells; Medicine and Health Sciences; Cortical Synchronization; Theta Rhythm; Physics; Neurons; Coding Mechanisms; Ecology; Simulation and Modeling; Classical Mechanics; Brain; Electrophysiology; Computational Theory and Mathematics; Modeling and Simulation; Physical Sciences; Crystallographic Techniques; Cellular Types; Anatomy; Spatial Navigation; Research Article; Computer and Information Sciences; Neural Networks; Thalamus; Models, Neurological; Neurophysiology; Sensory system; Research and Analysis Methods; Membrane Potential; Cellular and Molecular Neuroscience; Path integration; Genetics; Animals; Rats, Long-Evans; Molecular Biology; Ecology, Evolution, Behavior and Systematics; Computational Neuroscience; Computational Biology; Biology and Life Sciences; Cell Biology; Phaser; Rats; Cellular Neuroscience; Idiothetic; Neuroscience - Abstract
During spatial navigation, the frequency and timing of spikes from spatial neurons including place cells in hippocampus and grid cells in medial entorhinal cortex are temporally organized by continuous theta oscillations (6–11 Hz). The theta rhythm is regulated by subcortical structures including the medial septum, but it is unclear how spatial information from place cells may reciprocally organize subcortical theta-rhythmic activity. Here we recorded single-unit spiking from a constellation of subcortical and hippocampal sites to study spatial modulation of rhythmic spike timing in rats freely exploring an open environment. Our analysis revealed a novel class of neurons that we termed ‘phaser cells,’ characterized by a symmetric coupling between firing rate and spike theta-phase. Phaser cells encoded space by assigning distinct phases to allocentric isocontour levels of each cell’s spatial firing pattern. In our dataset, phaser cells were predominantly located in the lateral septum, but also the hippocampus, anteroventral thalamus, lateral hypothalamus, and nucleus accumbens. Unlike the unidirectional late-to-early phase precession of place cells, bidirectional phase modulation acted to return phaser cells to the same theta-phase along a given spatial isocontour, including cells that characteristically shifted to later phases at higher firing rates. Our dynamical models of intrinsic theta-bursting neurons demonstrated that experience-independent temporal coding mechanisms can qualitatively explain (1) the spatial rate-phase relationships of phaser cells and (2) the observed temporal segregation of phaser cells according to phase-shift direction. In open-field phaser cell simulations, competitive learning embedded phase-code entrainment maps into the weights of downstream targets, including path integration networks. Bayesian phase decoding revealed error correction capable of resetting path integration at subsecond timescales. 
Our findings suggest that phaser cells may instantiate a subcortical theta-rhythmic loop of spatial feedback. We outline a framework in which location-dependent synchrony reconciles internal idiothetic processes with the allothetic reference points of sensory experience. Author summary: Spatial cognition in mammals depends on position-related activity in the hippocampus and entorhinal cortex. Hippocampal place cells and entorhinal grid cells carry distinct maps as rodents move around. The grid cell map is thought to measure angles and distances from previous locations using path integration, a strategy of internally tracking self motion. However, path integration accumulates errors and must be ‘reset’ by external sensory cues. Allowing rats to explore an open arena, we recorded spiking neurons from areas interconnected with the entorhinal cortex, including subcortical structures and the hippocampus. Many of these subcortical regions help coordinate the hippocampal theta rhythm. Thus, we looked for spatial information in theta-rhythmic spiking and discovered ‘phaser cells’ in the lateral septum, which receives dense hippocampal input. Phaser cells encoded the rat’s position by shifting spike timing in symmetry with spatial changes in firing rate. We theorized that symmetric rate-phase coupling allows downstream networks to flexibly learn spatial patterns of synchrony. Using dynamical models and simulations, we showed that phaser cells may collectively transmit a fast, oscillatory reset signal. Our findings develop a new perspective on the temporal coding of space that may help disentangle competing models of path integration and cross-species differences in navigation.
- Published
- 2018
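The symmetric rate-phase coupling at the heart of this abstract can be caricatured in a few lines. In the sketch below, a theta-rhythmic Poisson neuron's preferred phase shifts with its firing rate; the log-rate phase shift and von Mises modulation are assumptions chosen for illustration, not the paper's fitted bursting model.

```python
import numpy as np

def mean_spike_phase(rate, duration=50.0, dt=0.001, theta_freq=8.0,
                     phase_gain=-0.5, seed=0):
    """Toy 'phaser cell': theta-rhythmic Poisson spiking whose preferred
    theta phase shifts with firing rate (here via phase_gain*log(rate),
    an assumed form). Returns the circular mean spike phase."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    preferred = np.pi + phase_gain * np.log(rate)
    theta_phase = (2.0 * np.pi * theta_freq * t) % (2.0 * np.pi)
    # von-Mises-shaped modulation of an inhomogeneous Poisson rate.
    lam = rate * np.exp(2.0 * (np.cos(theta_phase - preferred) - 1.0))
    spike_phases = theta_phase[rng.random(t.size) < lam * dt]
    return np.angle(np.mean(np.exp(1j * spike_phases))) % (2.0 * np.pi)

phase_low = mean_spike_phase(5.0)    # low firing rate -> later phase
phase_high = mean_spike_phase(40.0)  # high firing rate -> earlier phase
```

Because phase varies monotonically with rate, a downstream decoder could in principle read position along a firing-rate isocontour from spike phase alone, which is the coding property the paper exploits.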
9. An attempt to fit all parameters of a dynamical recurrent neural network from sensory neural spiking data
- Author
-
Özgür R. Doruk and Kechen Zhang
- Subjects
Recurrent neural network ,Quantitative Biology::Neurons and Cognition ,Computer science ,business.industry ,Sensory system ,Artificial intelligence ,business - Abstract
A simulation-based study on model fitting for sensory neurons from stimulus/response data is presented. The employed model is a continuous-time recurrent neural network (CTRNN), a member of a class of models with known universal approximation features. This feature of recurrent dynamical neural network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. This work is a continuation of a previous study in which the parameters associated with the sigmoidal gain functions were not taken into account. Here, we construct a similar framework, but all parameters associated with the model are estimated. The stimulus data are generated by a phased cosine Fourier series having fixed amplitude and frequency but a randomly drawn phase. Various values of amplitude, stimulus component size, and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical form. In addition, a comparison of the results with previous research is presented.
- Published
- 2018
- Full Text
- View/download PDF
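The CTRNN model class referenced in this abstract has a compact standard form. The sketch below integrates one such network with Euler steps; the weights, input, gain, and bias values are illustrative, whereas the paper estimates all of these parameters (including the sigmoid gains) from spiking data.

```python
import numpy as np

def ctrnn_step(x, W, I, tau=1.0, dt=0.01, gain=1.0, bias=0.0):
    """One Euler step of a continuous-time recurrent neural network:
        tau * dx/dt = -x + W @ sigmoid(gain * (x - bias)) + I
    The sigmoid gain and bias are among the parameters the paper fits;
    the values here are illustrative."""
    s = 1.0 / (1.0 + np.exp(-gain * (x - bias)))
    return x + (dt / tau) * (-x + W @ s + I)

rng = np.random.default_rng(1)
n = 4
W = 0.5 * rng.standard_normal((n, n))  # mixed excitatory/inhibitory weights
x = np.zeros(n)
for _ in range(2000):
    x = ctrnn_step(x, W, I=0.2)
x_next = ctrnn_step(x, W, I=0.2)  # at equilibrium, one more step barely moves
```

With these small weights the network contracts to a stable fixed point; fitting then amounts to adjusting W, gains, and biases so the resulting rates maximize the spike-timing likelihood.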
10. Analysis of an Attractor Neural Network’s Response to Conflicting External Inputs
- Author
-
Kathryn Hedrick and Kechen Zhang
- Subjects
Artificial neural network; Computer science; Research; Neuroscience (miscellaneous); Relative strength; Topology; Reduced model; Linear map; Attractor; Attractor network; Attractor neural network - Abstract
The theory of attractor neural networks has been influential in our understanding of the neural processes underlying spatial, declarative, and episodic memory. Many theoretical studies focus on the inherent properties of an attractor, such as its structure and capacity. Relatively little is known about how an attractor neural network responds to external inputs, which often carry conflicting information about a stimulus. In this paper we analyze the behavior of an attractor neural network driven by two conflicting external inputs. Our focus is on analyzing the emergent properties of the megamap model, a quasi-continuous attractor network in which place cells are flexibly recombined to represent a large spatial environment. In this model, the system shows a sharp transition from the winner-take-all mode, which is characteristic of standard continuous attractor neural networks, to a combinatorial mode in which the equilibrium activity pattern combines embedded attractor states in response to conflicting external inputs. We derive a numerical test for determining the operational mode of the system a priori. We then derive a linear transformation from the full megamap model with thousands of neurons to a reduced 2-unit model that has similar qualitative behavior. Our analysis of the reduced model and explicit expressions relating the parameters of the reduced model to the megamap elucidate the conditions under which the combinatorial mode emerges and the dynamics in each mode given the relative strength of the attractor network and the relative strength of the two conflicting inputs. Although we focus on a particular attractor network model, we describe a set of conditions under which our analysis can be applied to more general attractor neural networks.
- Published
- 2018
- Full Text
- View/download PDF
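The two operational modes described in this abstract can be reproduced in a two-unit caricature of the reduced model. The parameters below are illustrative, not the fitted megamap values: strong mutual inhibition yields winner-take-all, weak inhibition yields a combinatorial equilibrium with both patterns coactive.

```python
import numpy as np

def two_unit_attractor(I1, I2, w_inh, w_self=0.5, steps=5000, dt=0.01):
    """Reduced two-unit rate model of an attractor network driven by two
    conflicting external inputs (illustrative parameters). Each unit
    stands for one embedded attractor pattern; w_inh sets the strength
    of the competition between them."""
    r = np.array([0.6, 0.4])  # small initial bias toward unit 1
    for _ in range(steps):
        drive = np.array([w_self * r[0] - w_inh * r[1] + I1,
                          w_self * r[1] - w_inh * r[0] + I2])
        r = r + dt * (-r + np.maximum(0.0, drive))
    return r

winner = two_unit_attractor(1.0, 1.0, w_inh=3.0)  # winner-take-all mode
combo = two_unit_attractor(1.0, 1.0, w_inh=0.3)   # combinatorial mode
```

Sweeping w_inh between these two values produces the sharp mode transition the paper analyzes; its numerical test predicts the mode a priori from the relative strengths of attractor and inputs.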
11. Clock-Generated Temporal Codes Determine Synaptic Plasticity to Control Sleep
- Author
-
Kechen Zhang, Masashi Tabuchi, Qili Liu, Mark N. Wu, Sha Liu, Joseph D. Monaco, Benjamin J. Bell, and Grace Duan
- Subjects
Potassium Channels; Models, Neurological; Action Potentials; Plasticity; Biology; ENCODE; Receptors, N-Methyl-D-Aspartate; Synaptic Transmission; Article; General Biochemistry, Genetics and Molecular Biology; Arousal; Potassium Channels, Calcium-Activated; Circadian Clocks; Animals; Drosophila Proteins; RNA, Small Interfering; Neurons; Neuronal Plasticity; Circadian Rhythm; Optogenetics; Synaptic plasticity; Drosophila; RNA Interference; Sodium-Potassium-Exchanging ATPase; Neural coding; Sleep; Neuroscience; Genetic screen; Signal Transduction - Abstract
Neurons use two main schemes to encode information: rate coding (frequency of firing) and temporal coding (timing or pattern of firing). While the importance of rate coding is well-established, it remains controversial whether temporal codes alone are sufficient for controlling behavior. Moreover, the molecular mechanisms underlying the generation of specific temporal codes are enigmatic. Here, we show in Drosophila clock neurons that distinct temporal spike patterns, dissociated from changes in firing rate, encode time-dependent arousal and regulate sleep. From a large-scale genetic screen, we identify the molecular pathways mediating the circadian-dependent changes in ionic flux and spike morphology that rhythmically modulate spike timing. Remarkably, the daytime spiking pattern alone is sufficient to drive plasticity in downstream arousal neurons, leading to increased firing of these cells. These findings demonstrate a causal role for temporal coding in behavior and define a form of synaptic plasticity triggered solely by temporal spike patterns.
- Published
- 2018
12. Neural timing of stimulus events with microsecond precision
- Author
-
Torbjørn V. Ness, Silvio Macías, Gaute T. Einevoll, Jinhong Luo, Kechen Zhang, and Cynthia F. Moss
- Subjects
Male; Inferior colliculus; Auditory Pathways; Physiology; Echoes; Social Sciences; Action Potentials; Animal Cells; Chiroptera; Bats; Medicine and Health Sciences; Psychology; Biology (General); Audio signal processing; Neurons; Mammals; Animal Behavior; Physics; General Neuroscience; Eukaryota; Single Neuron Function; Electrophysiology; Microsecond; Vertebrates; Physical Sciences; Auditory Perception; Evoked Potentials, Auditory; Engineering and Technology; Female; Cellular Types; General Agricultural and Biological Sciences; Research Article; Bioacoustics; Models, Neurological; Neurophysiology; Human echolocation; Stimulus (physiology); Biology; Membrane Potential; Biophysical Phenomena; General Biochemistry, Genetics and Molecular Biology; Animals; Computer Simulation; Sound Localization; Computational Neuroscience; Behavior; General Immunology and Microbiology; Organisms; Biology and Life Sciences; Computational Biology; Cell Biology; Acoustics; Signal Bandwidth; Inferior Colliculi; Acoustic Stimulation; Cellular Neuroscience; Echolocation; Amniotes; Signal Processing; Time Perception; Auditory localization; Auditory Physiology; Zoology; Neuroscience - Abstract
Temporal analysis of sound is fundamental to auditory processing throughout the animal kingdom. Echolocating bats are powerful models for investigating the underlying mechanisms of auditory temporal processing, as they show microsecond precision in discriminating the timing of acoustic events. However, the neural basis for microsecond auditory discrimination in bats has eluded researchers for decades. Combining extracellular recordings in the midbrain inferior colliculus (IC) and mathematical modeling, we show that microsecond precision in registering stimulus events emerges from synchronous neural firing, revealed through low-latency variability of stimulus-evoked extracellular field potentials (EFPs, 200–600 Hz). The temporal precision of the EFP increases with the number of neurons firing in synchrony. Moreover, there is a functional relationship between the temporal precision of the EFP and the spectrotemporal features of the echolocation calls. In addition, EFP can measure the time difference of simulated echolocation call–echo pairs with microsecond precision. We propose that synchronous firing of populations of neurons operates in diverse species to support temporal analysis for auditory localization and complex sound processing. Author summary: We routinely rely on a stopwatch to precisely measure the time it takes for an athlete to reach the finish line. Without the assistance of such a timing device, our measurement of elapsed time becomes imprecise. By contrast, some animals, such as echolocating bats, naturally perform timing tasks with remarkable precision. Behavioral research has shown that echolocating bats can estimate the elapsed time between sonar cries and echo returns with a precision in the range of microseconds. However, the neural basis for such microsecond precision has remained a puzzle to scientists.
Combining extracellular recordings in the bat’s inferior colliculus (IC)—a midbrain nucleus of the auditory pathway—and mathematical modeling, we show that microsecond precision in registering stimulus events emerges from synchronous neural firing. Our recordings revealed a low-latency variability of stimulus-evoked extracellular field potentials (EFPs), which, according to our mathematical modeling, was determined by the number of firing neurons and their synchrony. Moreover, the acoustic features of echolocation calls, such as signal duration and bandwidth, which the bat dynamically modulates during prey capture, also modulate the precision of EFPs. These findings have broad implications for understanding temporal analysis of acoustic signals in a wide range of auditory behaviors across the animal kingdom.
- Published
- 2018
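The core modeling claim above, that temporal precision sharpens with the number of synchronously firing neurons, has a simple statistical skeleton: averaging N independently jittered spike times shrinks the timing error roughly as 1/sqrt(N). The sketch below demonstrates that scaling with Gaussian jitter; the 100-microsecond jitter value is an arbitrary illustration, not a measured quantity from the paper.

```python
import numpy as np

def event_time_sd(n_neurons, jitter_sd=100e-6, trials=2000, seed=42):
    """Across-trial SD of the population-mean spike time when each of
    n_neurons fires once per event with independent Gaussian jitter
    (100 microseconds here). Averaging N synchronous neurons shrinks
    the timing error roughly as 1/sqrt(N)."""
    rng = np.random.default_rng(seed)
    times = rng.normal(0.0, jitter_sd, size=(trials, n_neurons))
    return times.mean(axis=1).std()

sd_1 = event_time_sd(1)      # single neuron: ~100 us
sd_100 = event_time_sd(100)  # 100 synchronous neurons: ~10 us
```

This is only the statistical intuition; the paper's EFP analysis additionally ties the achievable precision to call bandwidth and duration.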
13. Single nucleus analysis of the chromatin landscape in mouse forebrain development
- Author
-
Rongxin Fang, Diane E. Dickel, Hui Huang, Yanxiao Zhang, Yuan Zhao, Afzal, Axel Visel, Brandon C. Sos, Ramya Raviram, Kechen Zhang, Len A. Pennacchio, Bing Ren, Sebastian Preissl, Samantha Kuan, and David U. Gorkin
- Subjects
Genetics; Cell type; Human Genome; Population; Genomics; Computational biology; Biology; Stem Cell Research; Genome; Chromatin; Underpinning research; Regulatory sequence; Forebrain; Generic health relevance; ChIA-PET; Biotechnology - Abstract
Genome-wide analysis of chromatin accessibility in primary tissues has uncovered millions of candidate regulatory sequences in the human and mouse genomes [1–4]. However, the heterogeneity of biological samples used in previous studies has prevented a precise understanding of the dynamic chromatin landscape in specific cell types. Here, we show that analysis of the transposase-accessible chromatin in single nuclei isolated from frozen tissue samples can resolve cellular heterogeneity and delineate transcriptional regulatory sequences in the constituent cell types. Our strategy is based on a combinatorial barcoding-assisted single-cell assay for transposase-accessible chromatin [5] and is optimized for nuclei from flash-frozen primary tissue samples (snATAC-seq). We used this method to examine the mouse forebrain at seven developmental stages and in adults. From snATAC-seq profiles of more than 15,000 high-quality nuclei, we identify 20 distinct cell populations corresponding to major neuronal and non-neuronal cell types in foetal and adult forebrains. We further define cell-type-specific cis-regulatory sequences and infer potential master transcriptional regulators of each cell population. Our results demonstrate the feasibility of a general approach for identifying cell-type-specific cis-regulatory sequences in heterogeneous tissue samples, and provide a rich resource for understanding forebrain development in mammals.
- Published
- 2017
- Full Text
- View/download PDF
14. Information-theoretic interpretation of tuning curves for multiple motion directions
- Author
-
Xin Huang, Kechen Zhang, and Wentao Huang
- Subjects
Information Theory (cs.IT); Neural and Evolutionary Computing (cs.NE); Neurons and Cognition (q-bio.NC); Quantitative Methods (q-bio.QM); Population; Fisher information; Representation (mathematics); Network model; Mathematics; Pattern recognition; Mutual information; Visualization; Visual cortex; Artificial intelligence; Neural coding; Algorithm - Abstract
We have developed an efficient information-maximization method for computing the optimal shapes of tuning curves of sensory neurons by optimizing the parameters of the underlying feedforward network model. When applied to the problem of population coding of visual motion with multiple directions, our method yields several types of tuning curves with both symmetric and asymmetric shapes that resemble those found in the visual cortex. Our result suggests that the diversity or heterogeneity of tuning curve shapes as observed in neurophysiological experiments might actually constitute an optimal population representation of visual motions with multiple components. (Comment: The 51st Annual Conference on Information Sciences and Systems (CISS), 2017.)
- Published
- 2017
- Full Text
- View/download PDF
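A closely related quantity to the mutual-information objective in this abstract is the population Fisher information, which for independent Poisson neurons is a sum of per-neuron terms. The sketch below computes it for an evenly spaced bank of von Mises direction-tuned cells; this is an illustrative criterion and toy population, not the paper's exact information-maximization method or its optimized heterogeneous curves.

```python
import numpy as np

def fisher_info(theta, centers, kappa=2.0, r_max=20.0):
    """Fisher information of a population of von Mises direction-tuned
    neurons with independent Poisson noise:
        J(theta) = sum_i f_i'(theta)^2 / f_i(theta).
    Tuning widths (kappa) and peak rates (r_max) are illustrative."""
    f = r_max * np.exp(kappa * (np.cos(theta - centers) - 1.0))
    fprime = (-r_max * kappa * np.sin(theta - centers)
              * np.exp(kappa * (np.cos(theta - centers) - 1.0)))
    return np.sum(fprime ** 2 / f)

centers = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
J = np.array([fisher_info(t, centers)
              for t in np.linspace(0.0, 2.0 * np.pi, 100)])
```

Evenly spaced identical curves give near-uniform Fisher information over direction; the paper asks the harder question of which (possibly asymmetric) curve shapes maximize the full mutual information for multi-component motion.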
15. Universal conditions for exact path integration in neural systems
- Author
-
John B. Issa and Kechen Zhang
- Subjects
Multidisciplinary; Artificial neural network; Linear system; Models, Theoretical; Biological Sciences; Topology; Synaptic weight; Control theory; Dead reckoning; Path integration; Trajectory; Animals; Humans; Nervous System Physiological Phenomena; Attractor network; Mathematics - Abstract
Animals are capable of navigation even in the absence of prominent landmark cues. This behavioral demonstration of path integration is supported by the discovery of place cells and other neurons that show path-invariant response properties even in the dark. That is, under suitable conditions, the activity of these neurons depends primarily on the spatial location of the animal regardless of which trajectory it followed to reach that position. Although many models of path integration have been proposed, no known single theoretical framework can formally accommodate their diverse computational mechanisms. Here we derive a set of necessary and sufficient conditions for a general class of systems that performs exact path integration. These conditions include multiplicative modulation by velocity inputs and a path-invariance condition that limits the structure of connections in the underlying neural network. In particular, for a linear system to satisfy the path-invariance condition, the effective synaptic weight matrices under different velocities must commute. Our theory subsumes several existing exact path integration models as special cases. We use entorhinal grid cells as an example to demonstrate that our framework can provide useful guidance for finding unexpected solutions to the path integration problem. This framework may help constrain future experimental and modeling studies pertaining to a broad class of neural integration systems.
- Published
- 2012
- Full Text
- View/download PDF
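The commutation condition stated in this abstract can be checked numerically. Circulant matrices (shift-invariant connectivity on a ring) all commute with one another, so they furnish a concrete family of velocity-dependent effective weights whose propagators are path invariant; this is one special case the theory subsumes, not its general construction.

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix whose k-th row is the first row cyclically
    shifted by k. All same-size circulants are polynomials in the
    cyclic shift, hence they commute."""
    return np.array([np.roll(first_row, k) for k in range(len(first_row))])

rng = np.random.default_rng(3)
n = 8
A = circulant(rng.standard_normal(n))  # effective weights under velocity vx
B = circulant(rng.standard_normal(n))  # effective weights under velocity vy

# Discrete-time propagators for one small step along x or along y.
dt = 0.01
Px = np.eye(n) + dt * A
Py = np.eye(n) + dt * B

commute_err = np.max(np.abs(A @ B - B @ A))
# Path invariance: stepping x-then-y equals y-then-x.
path_err = np.max(np.abs(Px @ Py - Py @ Px))
```

Both residuals vanish to machine precision; for non-commuting weight matrices, the propagator products differ and the integrated position becomes trajectory dependent, violating exact path integration.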
16. Information-Theoretic Bounds and Approximations in Neural Population Coding
- Author
-
Kechen Zhang and Wentao Huang
- Subjects
Information Theory (cs.IT); Machine Learning (cs.LG); Computer science; Cognitive Neuroscience; Computation; Models, Neurological; Information theory; Article; Arts and Humanities (miscellaneous); Humans; Applied mathematics; Computer Simulation; Neurons; Dimensionality reduction; Mutual information; Convex optimization; Algorithms; Curse of dimensionality - Abstract
While Shannon's mutual information has widespread applications in many disciplines, for practical applications it is often difficult to calculate its value accurately for high-dimensional variables because of the curse of dimensionality. This article focuses on effective approximation methods for evaluating mutual information in the context of neural population coding. For large but finite neural populations, we derive several information-theoretic asymptotic bounds and approximation formulas that remain valid in high-dimensional spaces. We prove that optimizing the population density distribution based on these approximation formulas is a convex optimization problem that allows efficient numerical solutions. Numerical simulation results confirmed that our asymptotic formulas were highly accurate for approximating mutual information for large neural populations. In special cases, the approximation formulas are exactly equal to the true mutual information. We also discuss techniques of variable transformation and dimensionality reduction to facilitate computation of the approximations.
- Published
- 2016
- Full Text
- View/download PDF
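The flavor of the large-population asymptotics in this abstract can be seen in a toy channel where everything is computable in closed form. Below, a Gaussian stimulus is observed through n averaged Gaussian-noise neurons; the Fisher-information approximation I ≈ H(theta) + 0.5*E[log(J/(2*pi*e))] converges to the exact mutual information as n grows. The channel and this particular approximation form are a standard illustration, not the paper's general bounds.

```python
import numpy as np

def mi_exact(n, sig_theta=1.0, sig=1.0):
    """Exact mutual information (nats) for theta ~ N(0, sig_theta^2)
    observed as theta plus Gaussian noise of variance sig^2/n:
    I = 0.5 * log(1 + n * snr)."""
    return 0.5 * np.log(1.0 + n * sig_theta**2 / sig**2)

def mi_fisher(n, sig_theta=1.0, sig=1.0):
    """Fisher-information approximation with J = n / sig^2:
    I ~ H(theta) + 0.5 * log(J / (2*pi*e))."""
    h_theta = 0.5 * np.log(2.0 * np.pi * np.e * sig_theta**2)
    return h_theta + 0.5 * np.log(n / (sig**2 * 2.0 * np.pi * np.e))

gap_small = mi_exact(10) - mi_fisher(10)        # noticeable at n = 10
gap_large = mi_exact(10000) - mi_fisher(10000)  # negligible at n = 10^4
```

Here the gap is exactly 0.5*log(1 + 1/n), shrinking toward zero; the paper's contribution is bounds and corrections of this kind that remain usable for high-dimensional stimuli.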
17. Scale-Invariant Memory Representations Emerge from Moiré Interference between Grid Fields That Produce Theta Oscillations: A Computational Model
- Author
-
Hugh T. Blair, Kechen Zhang, and Adam C. Welday
- Subjects
Male; Models, Neurological; Population; Place cell; Basis function; Topology; Memory; Animals; Entorhinal Cortex; Rats, Long-Evans; Hexagonal lattice; Theta Rhythm; Scaling; Physics; Communication; General Neuroscience; Computational Biology; Articles; Moiré pattern; Scale invariance; Grid; Rats; Moire Topography - Abstract
The dorsomedial entorhinal cortex (dMEC) of the rat brain contains a remarkable population of spatially tuned neurons called grid cells (Hafting et al., 2005). Each grid cell fires selectively at multiple spatial locations, which are geometrically arranged to form a hexagonal lattice that tiles the surface of the rat's environment. Here, we show that grid fields can combine with one another to form moiré interference patterns, referred to as “moiré grids,” that replicate the hexagonal lattice over an infinite range of spatial scales. We propose that dMEC grids are actually moiré grids formed by interference between much smaller “theta grids,” which are hypothesized to be the primary source of movement-related theta rhythm in the rat brain. The formation of moiré grids from theta grids obeys two scaling laws, referred to as the length and rotational scaling rules. The length scaling rule appears to account for firing properties of grid cells in layer II of dMEC, whereas the rotational scaling rule can better explain properties of layer III grid cells. Moiré grids built from theta grids can be combined to form yet larger grids and can also be used as basis functions to construct memory representations of spatial locations (place cells) or visual images. Memory representations built from moiré grids are automatically endowed with size invariance by the scaling properties of the moiré grids. We therefore propose that moiré interference between grid fields may constitute an important principle of neural computation underlying the construction of scale-invariant memory representations.
- Published
- 2007
- Full Text
- View/download PDF
18. A universal scaling law between gray matter and white matter of cerebral cortex
- Author
-
Terrence J. Sejnowski and Kechen Zhang
- Subjects
Cerebral Cortex ,Mammals ,Primates ,Physics ,Scaling law ,Multidisciplinary ,Neocortex ,Models, Neurological ,Anatomy ,Biological Sciences ,Models, Theoretical ,Orders of magnitude (volume) ,Gray (unit) ,Power law ,Axons ,White matter ,Nerve Fibers ,medicine.anatomical_structure ,Species Specificity ,Cerebral cortex ,Cortex (anatomy) ,medicine ,Animals ,Humans ,Neuroscience - Abstract
Neocortex, a new and rapidly evolving brain structure in mammals, has a similar layered architecture in species over a wide range of brain sizes. Larger brains require longer fibers to communicate between distant cortical areas; the volume of the white matter that contains long axons increases disproportionally faster than the volume of the gray matter that contains cell bodies, dendrites, and axons for local information processing, according to a power law. The theoretical analysis presented here shows how this remarkable anatomical regularity might arise naturally as a consequence of the local uniformity of the cortex and the requirement for compact arrangement of long axonal fibers. The predicted power law with an exponent of 4/3 minus a small correction for the thickness of the cortex accurately accounts for empirical data spanning several orders of magnitude in brain sizes for various mammalian species, including human and nonhuman primates.
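The claimed regularity is a straight line of slope 4/3 on log-log axes. A minimal sketch of that fit, with synthetic volumes standing in for the empirical data (the prefactor 0.05 is arbitrary):

```python
import math

def fit_loglog_slope(xs, ys):
    """Least-squares slope of log(y) versus log(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic gray-matter volumes spanning several orders of magnitude (cm^3),
# with white matter following the idealized law (no thickness correction):
G = [0.1, 1.0, 10.0, 100.0, 1000.0]
W = [0.05 * g ** (4 / 3) for g in G]
slope = fit_loglog_slope(G, W)
print(slope)   # 1.333..., recovering the 4/3 exponent
```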
- Published
- 2000
- Full Text
- View/download PDF
19. A Theory of Geometric Constraints on Neural Activity for Natural Three-Dimensional Movement
- Author
-
Terrence J. Sejnowski and Kechen Zhang
- Subjects
Computer science ,Movement ,Population ,Motion Perception ,Degrees of freedom (statistics) ,Spatial Behavior ,Angular velocity ,Topology ,Article ,Acceleration ,Orientation ,Orientation (geometry) ,Animals ,Humans ,education ,Neurons ,education.field_of_study ,Communication ,Quantitative Biology::Neurons and Cognition ,business.industry ,General Neuroscience ,Linear system ,Evoked Potentials, Motor ,Nonlinear system ,Linear Models ,business ,Rotation (mathematics) ,Mathematics - Abstract
Although the orientation of an arm in space or the static view of an object may be represented by a population of neurons in complex ways, how these variables change with movement often follows simple linear rules, reflecting the underlying geometric constraints in the physical world. A theoretical analysis is presented for how such constraints affect the average firing rates of sensory and motor neurons during natural movements with low degrees of freedom, such as a limb movement and rigid object motion. When applied to nonrigid reaching arm movements, the linear theory accounts for cosine directional tuning with linear speed modulation, predicts a curl-free spatial distribution of preferred directions, and also explains why the instantaneous motion of the hand can be recovered from the neural population activity. For three-dimensional motion of a rigid object, the theory predicts that, to a first approximation, the response of a sensory neuron should have a preferred translational direction and a preferred rotation axis in space, both with cosine tuning functions modulated multiplicatively by speed and angular speed, respectively. Some known tuning properties of motion-sensitive neurons follow as special cases. Acceleration tuning and nonlinear speed modulation are considered in an extension of the linear theory. This general approach provides a principled method to derive mechanism-insensitive neuronal properties by exploiting the inherently low dimensionality of natural movements.
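The cosine directional tuning with linear speed modulation described above is equivalent to a baseline plus a dot product with a preferred direction, which is why a population vector can recover the instantaneous hand motion. A sketch with hypothetical baseline and gain values:

```python
import math

def firing_rate(v, pref, b=10.0, k=2.0):
    """Baseline plus k * (velocity dot preferred direction): the dot
    product yields cosine direction tuning scaled linearly by speed."""
    return b + k * (v[0] * pref[0] + v[1] * pref[1])

def population_vector(v, n=8, b=10.0, k=2.0):
    """Sum of preferred directions weighted by rate above baseline,
    for n units with uniformly spaced preferred directions."""
    px = py = 0.0
    for i in range(n):
        a = 2 * math.pi * i / n
        p = (math.cos(a), math.sin(a))
        w = firing_rate(v, p, b, k) - b
        px += w * p[0]
        py += w * p[1]
    return px, py

v = (0.6, 0.8)          # hand velocity, speed 1.0
pv = population_vector(v)
angle = math.degrees(math.atan2(pv[1], pv[0]))
print(angle)            # matches the true movement direction, ~53.13 degrees
```

With uniformly spaced preferred directions the weighted sum is exactly proportional to the velocity vector, so the decoded direction is exact rather than approximate.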
- Published
- 1999
- Full Text
- View/download PDF
20. Neuronal Tuning: To Sharpen or Broaden?
- Author
-
Terrence J. Sejnowski and Kechen Zhang
- Subjects
Neurons ,education.field_of_study ,Quantitative Biology::Neurons and Cognition ,Artificial neural network ,Cognitive Neuroscience ,Models, Neurological ,Population ,Electrophysiology ,symbols.namesake ,Noise ,Arts and Humanities (miscellaneous) ,Dimension (vector space) ,Neuronal tuning ,symbols ,Probability distribution ,Artifacts ,Fisher information ,education ,Algorithm ,Mathematics ,Curse of dimensionality - Abstract
Sensory and motor variables are typically represented by a population of broadly tuned neurons. A coarser representation with broader tuning can often improve coding accuracy, but sometimes the accuracy may also improve with sharper tuning. The theoretical analysis here shows that the relationship between tuning width and accuracy depends crucially on the dimension of the encoded variable. A general rule is derived for how the Fisher information scales with the tuning width, regardless of the exact shape of the tuning function, the probability distribution of spikes, and allowing some correlated noise between neurons. These results demonstrate a universal dimensionality effect in neural population coding.
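For independent Poisson neurons the Fisher information is J(x) = sum_i f_i'(x)^2 / f_i(x), and the scaling rule gives J proportional to sigma^(D-2) for tuning width sigma in D dimensions. A numerical sketch for D = 1, where broadening should hurt (lattice spacing and peak rate here are hypothetical):

```python
import math

def fisher_info_1d(x, sigma, spacing=0.05, F=10.0, span=30.0):
    """Fisher information at x for independent Poisson neurons with
    Gaussian tuning f_i(x) = F * exp(-(x - c_i)^2 / (2 sigma^2)),
    centers c_i on a dense 1D lattice: J(x) = sum_i f_i'(x)^2 / f_i(x)."""
    J = 0.0
    steps = int(2 * span / spacing) + 1
    for k in range(steps):
        c = -span + k * spacing
        f = F * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
        if f > 1e-300:                 # skip numerically dead tails
            fp = f * (c - x) / sigma ** 2
            J += fp * fp / f
    return J

# In one dimension the rule predicts J ~ sigma^(1-2) = 1/sigma,
# so doubling the tuning width should halve the information:
ratio = fisher_info_1d(0.0, 2.0) / fisher_info_1d(0.0, 1.0)
print(ratio)   # ~0.5
```

For D = 2 the same rule predicts no dependence on width at all, and for D >= 3 broadening actually increases the information, which is the dimensionality effect the abstract refers to.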
- Published
- 1999
- Full Text
- View/download PDF
21. Statistically Efficient Estimation Using Population Coding
- Author
-
Kechen Zhang, Peter E. Latham, Sophie Denève, and Alexandre Pouget
- Subjects
Neurons ,Likelihood Functions ,Models, Statistical ,Artificial neural network ,Computer science ,Cognitive Neuroscience ,Scalar (mathematics) ,Linear model ,Information theory ,ENCODE ,Nonlinear system ,Nonlinear Dynamics ,Arts and Humanities (miscellaneous) ,Statistics ,Linear Models ,Computer Simulation ,Neural Networks, Computer ,Neural coding ,Algorithm ,Coding (social sciences) - Abstract
Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, by contrast, they typically encode the variable with a further population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform estimation in a near-optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.
- Published
- 1998
- Full Text
- View/download PDF
22. Spatial rate/phase codes provide landmark-based error correction in a temporal model of theta cells
- Author
-
Joseph D. Monaco, H. Tad Blair, and Kechen Zhang
- Subjects
Quantitative Biology::Neurons and Cognition - Abstract
Location-Controlled Oscillator Model. This repository contains the IPython Notebook files that were used to develop the location-controlled oscillator (LCO) model that I presented at the Society for Neuroscience 2014 meeting: J. D. Monaco, K. Zhang, and H. T. Blair. (2014) Spatial rate/phase codes provide landmark-based error correction in a temporal model of theta cells. Society for Neuroscience 2014, Washington, D.C. 752.07. Abstract: The spatial firing patterns of place cells in hippocampus and grid cells in entorhinal cortex form a spatial representation that is stable during active navigation but also able to encode changes in external landmarks or environmental structure. One class of model that has been investigated as a possible mechanism for generating these spatial patterns relies on temporal synchronization between theta cells that act as velocity-controlled oscillators; theta cells fire strongly with the septohippocampal theta rhythm (6–10 Hz) and are found throughout the hippocampal formation. However, a critical problem for these models is that the oscillatory interference patterns they generate become unstable in the presence of phase noise and errors in self-motion signals. Previous studies have proposed hybridizing temporal models with attractor network models or integrating environmental feedback from sensory cues. Preliminary data from subcortical regions in rats suggest that some theta cells exhibit spatially selective firing similar to hippocampal place fields or entorhinal/subicular boundary fields. These cells also demonstrate a consistent phase relationship across space, relative to ongoing hippocampal theta and to other simultaneously recorded cells, that is correlated with the firing rate at a given location.
Inspired by these data, we present a novel synchronization model in which place cells or boundary-vector cells provide a stable, landmark-based excitatory input that drives a rate-to-phase mechanism to generate a population of cells that act as location-controlled oscillators. These cells fire preferentially at theta phases that are specific to a given location, determined by the presence of external landmarks. We show that these location-controlled oscillators provide a stable spatial reference that corrects phase errors in velocity-controlled oscillators, thus preventing the encoded position from drifting with respect to the environment and the actual position of the animal. In this way, landmark-based rate/phase correlations in extrahippocampal areas may provide the sensory feedback required by temporal models of neural representations of space.
- Published
- 2014
- Full Text
- View/download PDF
23. Attractor dynamics of spatially correlated neural activity in the limbic system
- Author
-
Kechen Zhang and James J. Knierim
- Subjects
Quantitative Biology::Neurons and Cognition ,Plane (geometry) ,General Neuroscience ,Computation ,Dynamics (mechanics) ,Models, Neurological ,Place cell ,Brain ,Grid cell ,Article ,Nonlinear Sciences::Chaotic Dynamics ,Neural activity ,Limbic system ,medicine.anatomical_structure ,Nonlinear Dynamics ,Attractor ,medicine ,Limbic System ,Animals ,Neural Networks, Computer ,Psychology ,Neuroscience - Abstract
Attractor networks are a popular computational construct used to model different brain systems. These networks allow elegant computations that are thought to represent a number of aspects of brain function. Although there is good reason to believe that the brain displays attractor dynamics, it has proven difficult to test experimentally whether any particular attractor architecture resides in any particular brain circuit. We review models and experimental evidence for three systems in the rat brain that are presumed to be components of the rat's navigational and memory system. Head-direction cells have been modeled as a ring attractor, grid cells as a plane attractor, and place cells both as a plane attractor and as a point attractor. Whereas the models have proven to be extremely useful conceptual tools, the experimental evidence in their favor, although intriguing, is still mostly circumstantial.
- Published
- 2012
24. Emergence of Position-Independent Detectors of Sense of Rotation and Dilation with Hebbian Learning: An Analysis
- Author
-
Martin I. Sereno, Margaret E. Sereno, and Kechen Zhang
- Subjects
Hebbian theory ,Arts and Humanities (miscellaneous) ,Artificial neural network ,Cognitive Neuroscience ,Superior temporal area ,Detector ,Mathematical analysis ,Translational velocity ,Dilation (morphology) ,Algorithm ,Mathematics - Abstract
We previously demonstrated that it is possible to learn position-independent responses to rotation and dilation by filtering rotations and dilations with different centers through an input layer with MT-like speed and direction tuning curves and connecting them to an MST-like layer with simple Hebbian synapses (Sereno and Sereno 1991). By analyzing an idealized version of the network with broader, sinusoidal direction-tuning and linear speed-tuning, we show analytically that a Hebb rule trained with arbitrary rotation, dilation/contraction, and translation velocity fields yields units with weight fields that are a rotation plus a dilation or contraction field, and whose responses to a rotating or dilating/contracting disk are exactly position independent. Differences between the performance of this idealized model and our original model (and real MST neurons) are discussed.
- Published
- 1993
- Full Text
- View/download PDF
25. Ipsilateral double parathyroid adenoma and thyroid hemiagenesis
- Author
-
Ralph P. Tufano, M Kim, Kechen Zhang, Wojciech K. Mydlarz, and S T Micchelli
- Subjects
Thyroid nodules ,Adenoma ,Pathology ,medicine.medical_specialty ,endocrine system diseases ,Thyroid Gland ,Thyroid dysgenesis ,medicine ,Humans ,Parathyroid disease ,Radionuclide Imaging ,Parathyroid adenoma ,Ultrasonography ,Hyperparathyroidism ,business.industry ,Middle Aged ,medicine.disease ,Hyperparathyroidism, Primary ,Intrathyroidal Parathyroid ,Parathyroid Neoplasms ,Otorhinolaryngology ,Thyroid Dysgenesis ,Parathyroid disorder ,Female ,Radiology ,business ,Primary hyperparathyroidism - Abstract
Background/Aims: To describe a case of left thyroid dysgenesis, accompanied by ipsilateral double parathyroid adenomas in a setting of primary hyperparathyroidism, and to review the pertinent literature on the diagnosis of these rare clinical scenarios. Methods: Review of the English literature with addition of a case report. Results: Preoperative evaluation included both sestamibi and ultrasound evaluation of the neck. Fine-needle aspiration biopsies of what were thought to be two concerning thyroid nodules revealed potential double intrathyroidal parathyroid adenomas. Video-assisted exploration verified double parathyroid adenomas and revealed concomitant left thyroid lobe dysgenesis. Intact parathyroid hormone level returned to normal and a greater than 50% drop from baseline was achieved intraoperatively with subsequent long-term cure. Conclusions: Thyroid dysgenesis is a rare, poorly understood and potentially confusing variety of developmental anomalies, which can be associated with thyroid as well as parathyroid disease. Clinical diagnosis is highly dependent upon the clinician maintaining an index of suspicion for these anomalies, thorough physical examination and careful review of available imaging modalities, especially while investigating thyroid and parathyroid disorders.
- Published
- 2010
26. Mechanical models of Maxwell’s demon with noninvariant phase volume
- Author
-
Kechen Zhang and Kezhao Zhang
- Subjects
Isolated system ,Physics ,media_common.quotation_subject ,Phase space ,Time evolution ,Second law of thermodynamics ,Statistical mechanics ,Statistical physics ,Invariant (physics) ,Atomic and Molecular Physics, and Optics ,Force field (chemistry) ,media_common ,Maxwell's demon - Abstract
This paper is concerned with the dynamical basis of Maxwell's demon within the framework of classical mechanics. We show that the operation of the demon, whose effect is equivalent to exerting a velocity-dependent force on the gas molecules, can be modeled as a suitable force field without disobeying any laws in classical mechanics. An essential requirement for the models is that the phase-space volume should be noninvariant during time evolution. The necessity of the requirement can be established under general conditions by showing that (1) a mechanical device is able to violate the second law of thermodynamics if and only if it can be used to generate and sustain a robust momentum flow inside an isolated system, and (2) no systems with invariant phase volume are able to support such a flow. The invariance of phase volume appears as an independent factor responsible for the validity of the second law of thermodynamics. When this requirement is removed, explicit mechanical models of Maxwell's demon can exist.
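The noninvariant-phase-volume requirement can be illustrated with the divergence of a flow: a force-free (Hamiltonian) flow preserves phase-space area by Liouville's theorem, whereas a velocity-dependent force of the kind attributed to the demon contracts it. A numerical sketch (not a model from the paper; the damping form and parameters are chosen only for illustration):

```python
import math

def step(x, v, gamma, dt):
    """One Euler step of xdot = v, vdot = -gamma * v, i.e. a
    velocity-dependent force such as a demon-like device would exert."""
    v = v * (1.0 - gamma * dt)
    x = x + v * dt
    return x, v

def phase_area_ratio(gamma, t, dt=0.001, d=1e-4):
    """Evolve three nearby phase points and compare the area of the
    parallelogram they span after evolution with its initial value d*d.
    Hamiltonian flows keep this ratio at 1; dissipative flows shrink it."""
    pts = [(0.0, 1.0), (d, 1.0), (0.0, 1.0 + d)]
    for _ in range(int(t / dt)):
        pts = [step(x, v, gamma, dt) for x, v in pts]
    (x0, v0), (x1, v1), (x2, v2) = pts
    area = (x1 - x0) * (v2 - v0) - (x2 - x0) * (v1 - v0)
    return area / (d * d)

print(phase_area_ratio(0.0, 2.0))  # ~1.0: no force, phase volume invariant
print(phase_area_ratio(0.5, 2.0))  # ~exp(-1) ~ 0.368: phase volume contracts
```

The contraction factor exp(-gamma * t) is exactly the exponential of the integrated phase-space divergence, which is the quantity the paper's argument requires to be nonzero.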
- Published
- 1992
- Full Text
- View/download PDF
27. How to modify a neural network gradually without changing its input-output functionality
- Author
-
Christopher DiMattina and Kechen Zhang
- Subjects
Central Nervous System ,Neurons ,Logarithm ,Computer simulation ,Artificial neural network ,Cognitive Neuroscience ,Models, Neurological ,Action Potentials ,Mathematical Concepts ,Synaptic Transmission ,Exponential function ,Models of neural computation ,Arts and Humanities (miscellaneous) ,Neural Pathways ,Synapses ,Biological neural network ,Neural Networks, Computer ,Nerve Net ,Stochastic neural network ,Power function ,Algorithm ,Mathematical Computing ,Algorithms ,Mathematics - Abstract
It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.
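For a power-function gain the equivalence class is easy to exhibit: scaling a hidden unit's input weights by c while scaling its output weight by c^(-p) leaves the input-output map untouched. A sketch with hypothetical weights (not taken from the paper):

```python
def network_output(x, W, a, p):
    """Three-layer net with hidden gains g(u) = u ** p:
    output = sum_k a_k * g(w_k . x)."""
    out = 0.0
    for wk, ak in zip(W, a):
        u = sum(wi * xi for wi, xi in zip(wk, x))
        out += ak * (u ** p)
    return out

p = 2.0
W = [[1.0, 2.0], [0.5, -1.0]]
a = [0.3, 0.7]

# Perturb: scale hidden unit k's input weights by c_k and compensate
# its output weight by c_k ** (-p); the overall map is unchanged.
c = [1.5, 0.8]
W2 = [[ck * wi for wi in wk] for ck, wk in zip(c, W)]
a2 = [ak * ck ** (-p) for ak, ck in zip(a, c)]

x = [0.9, -0.4]
print(network_output(x, W, a, p), network_output(x, W2, a2, p))  # equal outputs
```

Varying c continuously traces out a one-parameter family of structurally distinct but functionally identical networks, the kind of confounding direction the abstract describes.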
- Published
- 2009
28. Conversion of a phase- to a rate-coded position signal by a three-stage model of theta cells, grid cells, and place cells
- Author
-
Hugh T. Blair, Kishan Gupta, and Kechen Zhang
- Subjects
Vertex (graph theory) ,Cognitive Neuroscience ,Phase (waves) ,Place cell ,Normal Distribution ,Action Potentials ,Hippocampus ,Synaptic Transmission ,Article ,Position (vector) ,Biological Clocks ,Path integration ,Neural Pathways ,Animals ,Entorhinal Cortex ,Computer Simulation ,Head direction cells ,Theta Rhythm ,Network model ,Physics ,Neurons ,Quantitative Biology::Neurons and Cognition ,Grid ,Rats ,Space Perception ,Nerve Net ,Neuroscience - Abstract
As a rat navigates through a familiar environment, its position in space is encoded by firing rates of place cells and grid cells. Oscillatory interference models propose that this positional firing rate code is derived from a phase code, which stores the rat's position as a pattern of phase angles between velocity-modulated theta oscillations. Here we describe a three-stage network model, which formalizes the computational steps that are necessary for converting phase-coded position signals (represented by theta oscillations) into rate-coded position signals (represented by grid cells and place cells). The first stage of the model proposes that the phase-coded position signal is stored and updated by a bank of ring attractors, like those that have previously been hypothesized to perform angular path integration in the head-direction cell system. We show analytically how ring attractors can serve as central pattern generators for producing velocity-modulated theta oscillations, and we propose that such ring attractors may reside in subcortical areas where hippocampal theta rhythm is known to originate. In the second stage of the model, grid fields are formed by oscillatory interference between theta cells residing in different (but not the same) ring attractors. The model's third stage assumes that hippocampal neurons generate Gaussian place fields by computing weighted sums of inputs from a basis set of many grid fields. Here we show that under this assumption, the spatial frequency spectrum of the Gaussian place field defines the vertex spacings of grid cells that must provide input to the place cell. This analysis generates a testable prediction that grid cells with large vertex spacings should send projections to the entire hippocampus, whereas grid cells with smaller vertex spacings may project more selectively to the dorsal hippocampus, where place fields are smallest. © 2008 Wiley-Liss, Inc.
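The first two stages rest on a standard oscillatory-interference identity: a velocity-controlled oscillator's phase lead over a baseline theta oscillator integrates to a quantity proportional to displacement along its preferred direction, so phase codes position. A sketch with hypothetical parameters (beta, speeds, and path are illustrative only):

```python
import math

def vco_phase_difference(path, beta, d, dt=0.001):
    """Integrate the phase lead of a velocity-controlled oscillator
    (burst frequency f0 + beta * v.d) over a baseline theta oscillator
    (frequency f0): the f0 terms cancel, leaving phase proportional
    to net displacement along the preferred direction d."""
    phi = 0.0
    for vx, vy in path:
        phi += 2 * math.pi * beta * (vx * d[0] + vy * d[1]) * dt
    return phi

beta = 0.4          # hypothetical gain: cycles per cm traveled along d
d = (1.0, 0.0)
# Run 10 cm east at 20 cm/s, then 5 cm north; only the east leg matters.
path = [(20.0, 0.0)] * 500 + [(0.0, 20.0)] * 250
phi = vco_phase_difference(path, beta, d)
print(phi / (2 * math.pi))   # 4.0 cycles = beta * 10 cm: phase encodes position
```

Reading this phase out through interference (e.g. a rate proportional to cos(phi)) gives firing that repeats every 1/beta = 2.5 cm along d, the building block of a periodic grid field.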
- Published
- 2008
29. How optimal stimuli for sensory neurons are constrained by network architecture
- Author
-
Kechen Zhang and Christopher DiMattina
- Subjects
Central Nervous System ,Cognitive Neuroscience ,Models, Neurological ,Sensation ,Action Potentials ,Sensory system ,Stimulus (physiology) ,Synaptic Transmission ,Models of neural computation ,Arts and Humanities (miscellaneous) ,Control theory ,medicine ,Animals ,Humans ,Computer Simulation ,Neurons, Afferent ,Artificial neural network ,Feed forward ,Sensory neuron ,medicine.anatomical_structure ,Recurrent neural network ,Synapses ,Neuron ,Neural Networks, Computer ,Nerve Net ,Psychology ,Neuroscience ,Algorithms ,Photic Stimulation - Abstract
Identifying the optimal stimuli for a sensory neuron is often a difficult process involving trial and error. By analyzing the relationship between stimuli and responses in feedforward and stable recurrent neural network models, we find that the stimulus yielding the maximum firing rate response always lies on the topological boundary of the collection of all allowable stimuli, provided that individual neurons have increasing input-output relations or gain functions and that the synaptic connections are convergent between layers with nondegenerate weight matrices. This result suggests that in neurophysiological experiments under these conditions, only stimuli on the boundary need to be tested in order to maximize the response, thereby potentially reducing the number of trials needed for finding the most effective stimuli. Even when the gain functions allow firing rate cutoff or saturation, a peak still cannot exist in the stimulus-response relation in the sense that moving away from the optimum stimulus always reduces the response. We further demonstrate that the condition for nondegenerate synaptic connections also implies that proper stimuli can independently perturb the activities of all neurons in the same layer. One example of this type of manipulation is changing the activity of a single neuron in a given processing layer while keeping that of all others constant. Such stimulus perturbations might help experimentally isolate the interactions of selected neurons within a network.
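The boundary result can be checked numerically for a small feedforward network with increasing tanh gains and linearly independent (nondegenerate) weight vectors; the weights below are chosen purely for illustration:

```python
import math

def response(x, W=((1.0, 0.5), (-0.5, 1.0)), a=(1.0, 1.0)):
    """Two hidden units with increasing tanh gain functions and
    linearly independent weight vectors, summed at the output."""
    return sum(ak * math.tanh(wk[0] * x[0] + wk[1] * x[1])
               for wk, ak in zip(W, a))

# Exhaustively search the stimulus box [-1, 1]^2 on a fine grid:
n = 101
best, bx = -1e9, None
for i in range(n):
    for j in range(n):
        x = (-1 + 2 * i / (n - 1), -1 + 2 * j / (n - 1))
        r = response(x)
        if r > best:
            best, bx = r, x

on_boundary = max(abs(bx[0]), abs(bx[1])) == 1.0
print(bx, on_boundary)   # the maximizing stimulus lies on the box boundary
```

Because both gains are strictly increasing and the weight vectors are independent, the gradient of the response can never vanish inside the box, so the grid maximum necessarily lands on an edge, consistent with the theorem.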
- Published
- 2007
30. Efficient implementations of the adaptive PSI procedure for estimating multi-dimensional psychometric functions
- Author
-
Kechen Zhang and Christopher DiMattina
- Subjects
Ophthalmology ,Computer science ,Multi dimensional ,Implementation ,Sensory Systems ,Computational science - Published
- 2015
- Full Text
- View/download PDF
31. Anti-Hebbian synapses as a linear equation solver
- Author
-
Giorgio Ganis, Kechen Zhang, and Martin I. Sereno
- Subjects
Hebbian theory ,Theoretical computer science ,Generalized inverse ,Quantitative Biology::Neurons and Cognition ,Artificial neural network ,Iterative method ,Applied mathematics ,Computational problem ,System of linear equations ,Moore–Penrose pseudoinverse ,Linear equation ,Mathematics - Abstract
It is known that Hebbian synapses, with appropriate weight normalization, extract the first principal component of the input patterns. Anti-Hebb rules have been used in combination with Hebb rules to extract additional principal components or generate sparse codes. Here we show that the simple anti-Hebbian synapses alone can support an important computational function: solving simultaneous linear equations. During repetitive learning with a simple anti-Hebb rule, the weights onto an output unit always converge to the exact solution of the linear equations whose coefficients correspond to the input patterns and whose constant terms correspond to the biases, provided that the solution exists. If there are more equations than unknowns and no solution exists, the weights approach the values obtained by using the Moore-Penrose generalized inverse (pseudoinverse). No explicit matrix inversion is involved and there is no need to normalize weights. Mathematically, the anti-Hebb rule may be regarded as an iterative algorithm for learning a special case of the linear associative mapping. Since solving systems of linear equations is a very basic computational problem to which many other problems are often reduced, our interpretation suggests a potentially general computational role for the anti-Hebbian synapses and a certain type of long-term depression (LTD).
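The anti-Hebb rule described here acts, in effect, as an error-correcting update: with output y = w.x - b, the change delta_w = -eta * y * x drives every y toward zero, i.e. toward solving w.x = b for each stored pattern. A sketch on a 2x2 consistent system (learning rate and equations are hypothetical):

```python
def anti_hebb_solve(equations, eta=0.1, sweeps=200, n_unknowns=2):
    """Repeatedly present each equation as (input pattern x, bias b);
    the anti-Hebbian update w <- w - eta * y * x with y = w.x - b
    converges to the solution when one exists."""
    w = [0.0] * n_unknowns
    for _ in range(sweeps):
        for x, b in equations:
            y = sum(wi * xi for wi, xi in zip(w, x)) - b
            w = [wi - eta * y * xi for wi, xi in zip(w, x)]
    return w

# Solve 2*w1 + 1*w2 = 5 and 1*w1 + 3*w2 = 10 (exact solution w = (1, 3)):
w = anti_hebb_solve([((2.0, 1.0), 5.0), ((1.0, 3.0), 10.0)])
print(w)   # ~[1.0, 3.0]
```

No matrix inversion or weight normalization appears anywhere in the loop; for an inconsistent, overdetermined set of patterns the same iteration settles near the pseudoinverse (least-squares) weights, as the abstract states.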
- Published
- 2002
- Full Text
- View/download PDF
32. Chapter 21 Accuracy and learning in neuronal populations
- Author
-
Kechen Zhang and Terrence J. Sejnowski
- Subjects
education.field_of_study ,Quantitative Biology::Neurons and Cognition ,Logarithm ,Computer science ,business.industry ,Bayesian probability ,Population ,Pattern recognition ,Mutual information ,symbols.namesake ,Bayes' theorem ,Hebbian theory ,symbols ,Spike (software development) ,Artificial intelligence ,Fisher information ,business ,education - Abstract
Publisher Summary This chapter discusses the accuracy and learning in neuronal populations. The information about various sensory and motor variables contained in neuronal spike trains is quantified by either Shannon mutual information or Fisher information. The accuracy of encoding and decoding by a population of neurons as described by Fisher information has some general properties, including a universal scaling law with respect to the width of the tuning functions. The theoretical accuracy for reading information from population activity can be reached, in principle, by Bayesian reconstruction, which can be simplified by exploiting Poisson spike statistics. The Bayesian method can be implemented by a feedforward network, where the desired synaptic strength can be established by a Hebbian learning rule that is proportional to the logarithm of the pre-synaptic firing rate, suggesting that the method might be potentially relevant to biological systems.
- Published
- 2001
- Full Text
- View/download PDF
33. Interpreting neuronal population activity by reconstruction: unified framework with application to hippocampal place cells
- Author
-
Iris Ginzburg, Kechen Zhang, Bruce L. McNaughton, and Terrence J. Sejnowski
- Subjects
Mean squared error ,Physiology ,Computer science ,Spike train ,Population ,Models, Neurological ,Action Potentials ,Basis function ,Hippocampus ,Synaptic Transmission ,Running ,Memory ,Animals ,Poisson Distribution ,education ,Maze Learning ,Neurons ,education.field_of_study ,Brain Mapping ,General Neuroscience ,Template matching ,Cognitive neuroscience of visual object recognition ,Probabilistic logic ,Bayes Theorem ,Rats ,Space Perception ,Computational problem ,Nerve Net ,Algorithm ,Algorithms - Abstract
Zhang, Kechen, Iris Ginzburg, Bruce L. McNaughton, and Terrence J. Sejnowski. Interpreting neuronal population activity by reconstruction: unified framework with application to hippocampal place cells. J. Neurophysiol. 79: 1017–1044, 1998. Physical variables such as the orientation of a line in the visual field or the location of the body in space are coded as activity levels in populations of neurons. Reconstruction or decoding is an inverse problem in which the physical variables are estimated from observed neural activity. Reconstruction is useful first in quantifying how much information about the physical variables is present in the population and, second, in providing insight into how the brain might use distributed representations in solving related computational problems such as visual object recognition and spatial navigation. Two classes of reconstruction methods, namely, probabilistic or Bayesian methods and basis function methods, are discussed. They include important existing methods as special cases, such as population vector coding, optimal linear estimation, and template matching. As a representative example for the reconstruction problem, different methods were applied to multi-electrode spike train data from hippocampal place cells in freely moving rats. The reconstruction accuracy of the trajectories of the rats was compared for the different methods. Bayesian methods were especially accurate when a continuity constraint was enforced, and the best errors were within a factor of two of the information-theoretic limit on how accurate any reconstruction can be and were comparable with the intrinsic experimental errors in position tracking. In addition, the reconstruction analysis uncovered some interesting aspects of place cell activity, such as the tendency for erratic jumps of the reconstructed trajectory when the animal stopped running. 
In general, the theoretical values of the minimal achievable reconstruction errors quantify how accurately a physical variable is encoded in the neuronal population in the sense of mean square error, regardless of the method used for reading out the information. One related result is that the theoretical accuracy is independent of the width of the Gaussian tuning function only in two dimensions. Finally, all the reconstruction methods considered in this paper can be implemented by a unified neural network architecture, which the brain feasibly could use to solve related problems.
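The Bayesian method with Poisson spike statistics reduces to maximizing sum_i [n_i log f_i(x) - tau * f_i(x)] over candidate positions x. A one-dimensional sketch with hypothetical Gaussian place fields, where noise-free rounded counts stand in for recorded spike trains:

```python
import math

TAU = 0.25    # hypothetical decoding window (s)

def place_rate(x, center, F=15.0, sigma=5.0):
    """Gaussian place field (spikes/s); parameters are illustrative."""
    return F * math.exp(-(x - center) ** 2 / (2 * sigma ** 2))

def bayes_decode(counts, centers):
    """MAP position under independent Poisson spiking and a flat prior:
    log P(x | n) = sum_i [n_i log f_i(x) - TAU * f_i(x)] + const."""
    best_x, best_ll = None, -1e18
    for x in range(0, 101):
        ll = sum(n * math.log(place_rate(x, c) + 1e-9) - TAU * place_rate(x, c)
                 for n, c in zip(counts, centers))
        if ll > best_ll:
            best_x, best_ll = x, ll
    return best_x

centers = list(range(0, 101, 5))   # field centers along a 1 m track (cm)
true_x = 42.0
counts = [round(TAU * place_rate(true_x, c)) for c in centers]
print(bayes_decode(counts, centers))   # decodes to within ~1 cm of true_x
```

Enforcing the continuity constraint mentioned above would amount to replacing the flat prior with one centered on the previous estimate, which is what made the Bayesian decoders especially accurate on the place-cell data.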
- Published
- 1998
34. Spatial Orientation and Dynamics of Papez Circuit
- Author
-
Kechen Zhang
- Subjects
Reduction (complexity) ,Orientation (vector space) ,medicine.anatomical_structure ,Geometric phase ,Computer science ,Neural substrate ,Group (mathematics) ,Papez circuit ,medicine ,Representation (systemics) ,Simple cell ,Biological system - Abstract
Although Papez circuit was initially proposed as a neural substrate for emotions, recent findings of head-direction cells demonstrate that some parts of the structure are involved with representation of spatial orientation. We discuss several problems related to the dynamics of the spatial representation, including (1) reduction of the dynamics of coupled cell groups to that of a simple cell group, in which the formulation of the shift mechanism has been determined uniquely, (2) the relation between dynamic behaviors of coupled cell groups and lesion results, and (3) the limitation of one-dimensional spatial representation in geometric phase problem.
- Published
- 1997
- Full Text
- View/download PDF