62 results for "Sommer FT"
Search Results
2. International Neuroscience Initiatives through the Lens of High-Performance Computing
- Author
-
Bouchard KE, Aimone JB, Chun M, Dean T, Denker M, Diesmann M, Donofrio DD, Frank LM, Kasthuri N, Koch C, Rubel O, Simon HD, Sommer FT, and Prabhat
- Abstract
Neuroscience initiatives aim to develop new technologies and tools to measure and manipulate neuronal circuits. To deal with the massive amounts of data generated by these tools, the authors envision the co-location of open data repositories in standardized formats together with high-performance computing hardware utilizing open source optimized analysis codes.
- Published
- 2018
3. Computing With Residue Numbers in High-Dimensional Representation.
- Author
-
Kymn CJ, Kleyko D, Frady EP, Bybee C, Kanerva P, Sommer FT, and Olshausen BA
- Abstract
We introduce residue hyperdimensional computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using resources that scale only logarithmically with the range, a vast improvement over previous methods. It also exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
- Published
- 2024
- Full Text
- View/download PDF
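The abstract above builds on the classical residue number system, in which an integer is represented by its residues modulo pairwise-coprime moduli and arithmetic acts independently on each residue. The following minimal sketch illustrates only that classical substrate; the paper's actual contribution, embedding each residue in a random high-dimensional vector, is not reproduced here, and the moduli and function names are chosen for this example.

```python
# Sketch of the classical residue number system (RNS) underlying the paper's
# framework: an integer is represented by its residues modulo pairwise-coprime
# moduli, and arithmetic acts component-wise, with no carries between residues.
from math import prod

MODULI = (3, 5, 7)          # pairwise coprime; dynamic range = 3*5*7 = 105

def encode(x):
    """Integer -> tuple of residues."""
    return tuple(x % m for m in MODULI)

def add(a, b):
    """Component-wise addition; each residue is updated independently."""
    return tuple((ra + rb) % m for ra, rb, m in zip(a, b, MODULI))

def mul(a, b):
    """Component-wise multiplication."""
    return tuple((ra * rb) % m for ra, rb, m in zip(a, b, MODULI))

def decode(res):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

assert decode(add(encode(17), encode(25))) == 42
assert decode(mul(encode(6), encode(7))) == 42
```

The component-wise structure is what makes the operations parallelizable; the paper's vector encoding preserves this property while adding similarity structure and noise robustness.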
4. Perceptron Theory Can Predict the Accuracy of Neural Networks.
- Author
-
Kleyko D, Rosato A, Frady EP, Panella M, and Sommer FT
- Abstract
Multilayer neural networks set the current state of the art for many technical classification problems. But these networks are still, essentially, black boxes in terms of analyzing them and predicting their performance. Here, we develop a statistical theory for the one-layer perceptron and show that it can predict the performance of a surprisingly large variety of neural networks with different architectures. A general theory of classification with perceptrons is developed by generalizing an existing theory for analyzing reservoir computing models and connectionist models for symbolic reasoning known as vector symbolic architectures. Our statistical theory offers three formulas leveraging the signal statistics with increasing detail. The formulas are analytically intractable, but can be evaluated numerically. The description level that captures maximum detail requires stochastic sampling methods. Depending on the network model, the simpler formulas already yield high prediction accuracy. The quality of the theory's predictions is assessed in three experimental settings: a memorization task for echo state networks (ESNs) from the reservoir computing literature, a collection of classification datasets for shallow randomly connected networks, and the ImageNet dataset for deep convolutional neural networks. We find that the second description level of the perceptron theory can predict the performance of types of ESNs that could not be described previously. Furthermore, the theory can predict the performance of deep multilayer neural networks when applied to their output layer. While other methods for predicting neural network performance commonly require training an estimator model, the proposed theory requires only the first two moments of the distribution of the postsynaptic sums in the output neurons. Moreover, the perceptron theory compares favorably to other methods that do not rely on training an estimator model.
- Published
- 2024
- Full Text
- View/download PDF
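The core idea, predicting accuracy from only the first two moments of the postsynaptic sums, can be illustrated with a toy two-class version under a Gaussian approximation. This is a deliberate simplification for illustration, not the paper's three-formula theory; the class means and decision rule are invented for this example.

```python
# Toy illustration of the perceptron-theory idea: predict a linear readout's
# accuracy purely from the first two moments of the postsynaptic sums,
# assuming those sums are approximately Gaussian.
import math
import random

random.seed(0)

def phi(z):                       # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# Postsynaptic sums for two classes; the "theory" sees only their sample
# moments. Decision rule: class A if the sum is positive.
sums_a = [random.gauss(1.0, 1.0) for _ in range(20000)]
sums_b = [random.gauss(-1.5, 2.0) for _ in range(20000)]

def moments(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, math.sqrt(v)

ma, sa = moments(sums_a)
mb, sb = moments(sums_b)

# Moment-based prediction vs. accuracy measured by actually classifying.
predicted = 0.5 * (phi(ma / sa) + phi(-mb / sb))
empirical = 0.5 * (sum(s > 0 for s in sums_a) / len(sums_a)
                   + sum(s <= 0 for s in sums_b) / len(sums_b))

assert abs(predicted - empirical) < 0.02
```

The point of the sketch is that no retraining or estimator model is needed: two moments per output neuron suffice for the prediction, which is the property the abstract highlights.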
5. Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps.
- Author
-
Kymn CJ, Mazelet S, Thomas A, Kleyko D, Frady EP, Sommer FT, and Olshausen BA
- Abstract
We propose a normative model for spatial representation in the hippocampal formation that combines optimality principles, such as maximizing coding range and spatial information per neuron, with an algebraic framework for computing in distributed representation. Spatial position is encoded in a residue number system, with individual residues represented by high-dimensional, complex-valued vectors. These are composed into a single vector representing position by a similarity-preserving, conjunctive vector-binding operation. Self-consistency between the representations of the overall position and of the individual residues is enforced by a modular attractor network whose modules correspond to the grid cell modules in entorhinal cortex. The vector binding operation can also associate different contexts to spatial representations, yielding a model for entorhinal cortex and hippocampus. We show that the model achieves normative desiderata including superlinear scaling of patterns with dimension, robust error correction, and hexagonal, carry-free encoding of spatial position. These properties in turn enable robust path integration and association with sensory inputs. More generally, the model formalizes how compositional computations could occur in the hippocampal formation and leads to testable experimental predictions.
- Published
- 2024
6. News without the buzz: reading out weak theta rhythms in the hippocampus.
- Author
-
Agarwal G, Lustig B, Akera S, Pastalkova E, Lee AK, and Sommer FT
- Abstract
Local field potentials (LFPs) reflect the collective dynamics of neural populations, yet their exact relationship to neural codes remains unknown [1]. One notable exception is the theta rhythm of the rodent hippocampus, which seems to provide a reference clock to decode the animal's position from spatiotemporal patterns of neuronal spiking [2] or LFPs [3]. But when the animal stops, theta becomes irregular [4], potentially indicating the breakdown of temporal coding by neural populations. Here we show that no such breakdown occurs, introducing an artificial neural network that can recover position-tuned rhythmic patterns (pThetas) without relying on the more prominent theta rhythm as a reference clock. pTheta and theta preferentially correlate with place cell and interneuron spiking, respectively. When rats forage in an open field, pTheta is jointly tuned to position and head orientation, a property not seen in individual place cells but expected to emerge from place cell sequences [5]. Our work demonstrates that weak and intermittent oscillations, as seen in many brain regions and species, can carry behavioral information commensurate with population spike codes.
- Published
- 2023
- Full Text
- View/download PDF
7. Computing with Residue Numbers in High-Dimensional Representation.
- Author
-
Kymn CJ, Kleyko D, Frady EP, Bybee C, Kanerva P, Sommer FT, and Olshausen BA
- Abstract
We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using vastly fewer resources than previous methods, and it exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
- Published
- 2023
8. Efficient optimization with higher-order ising machines.
- Author
-
Bybee C, Kleyko D, Nikonov DE, Khosrowshahi A, Olshausen BA, and Sommer FT
- Abstract
A prominent approach to solving combinatorial optimization problems on parallel hardware is Ising machines, i.e., hardware implementations of networks of interacting binary spin variables. Most Ising machines leverage second-order interactions, although important classes of optimization problems, such as satisfiability problems, map more seamlessly to Ising networks with higher-order interactions. Here, we demonstrate that higher-order Ising machines can solve satisfiability problems more resource-efficiently, in terms of the number of spin variables and their connections, than traditional second-order Ising machines. Further, our results on a benchmark dataset of Boolean k-satisfiability problems show that higher-order Ising machines implemented with coupled oscillators rapidly find solutions that are better than those of second-order Ising machines, thus improving the state of the art for Ising machines.
- Published
- 2023
- Full Text
- View/download PDF
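Why k-SAT maps "more seamlessly" to higher-order interactions can be shown with a minimal sketch: a k-literal clause becomes a single degree-k energy term that is zero exactly when the clause is satisfied, so no auxiliary spins are needed. The clause and variable names below are invented for illustration; this is not the paper's oscillator implementation.

```python
# Sketch: mapping a Boolean clause to one higher-order Ising energy term.
# For a clause over spins s in {-1, +1} with literal signs sigma, the energy
#   E = prod_i (1 - sigma_i * s_i) / 2
# equals 1 exactly when every literal is violated and 0 otherwise, so
# minimizing E satisfies the clause. Expanding the product yields spin
# interactions up to order k -- the "higher-order" couplings of the paper.
from itertools import product

def clause_energy(spins, clause):
    """spins: dict var -> +1/-1. clause: list of (var, sign) literals,
    sign=+1 for a positive literal, -1 for a negated one."""
    e = 1.0
    for var, sign in clause:
        e *= (1 - sign * spins[var]) / 2
    return e

clause = [("x", +1), ("y", +1), ("z", -1)]   # x OR y OR (NOT z)

# Exhaustively check: energy is 0 iff the clause is satisfied.
for sx, sy, sz in product((-1, +1), repeat=3):
    spins = {"x": sx, "y": sy, "z": sz}
    satisfied = (sx == +1) or (sy == +1) or (sz == -1)
    assert clause_energy(spins, clause) == (0.0 if satisfied else 1.0)
```

A second-order machine would have to expand each such term using extra auxiliary spins and pairwise couplings, which is the resource overhead the abstract refers to.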
9. Learning Energy-Based Models in High-Dimensional Spaces with Multiscale Denoising-Score Matching.
- Author
-
Li Z, Chen Y, and Sommer FT
- Abstract
Energy-based models (EBMs) assign an unnormalized log probability to data samples. This functionality has a variety of applications, such as sample synthesis, data denoising, sample restoration, outlier detection, Bayesian reasoning, and many more. But the training of EBMs using standard maximum likelihood is extremely slow because it requires sampling from the model distribution. Score matching potentially alleviates this problem. In particular, denoising-score matching has been successfully used to train EBMs. Using noisy data samples with one fixed noise level, these models learn fast and yield good results in data denoising. However, such models had not been demonstrated to synthesize high-quality samples of high-dimensional data. Recently, a generative model trained by denoising-score matching was shown to accomplish excellent sample synthesis when trained with data samples corrupted with multiple levels of noise. Here we provide an analysis and empirical evidence showing that training with multiple noise levels is necessary when the data dimension is high. Leveraging this insight, we propose a novel EBM trained with multiscale denoising-score matching. Our model exhibits data-generation performance comparable to state-of-the-art techniques such as GANs and sets a new baseline for EBMs. The proposed model also provides density information and performs well on an image-inpainting task.
- Published
- 2023
- Full Text
- View/download PDF
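Single-noise-level denoising-score matching, the starting point of the abstract, can be illustrated in one dimension with a one-parameter linear score model, where the objective has a closed-form minimizer. This toy Gaussian setting is an assumption made for the example; it is not the paper's multiscale EBM.

```python
# Minimal illustration of denoising-score matching at one noise level:
# data x ~ N(0,1), noisy sample xn = x + N(0, sigma^2). The DSM objective
#   E[(s(xn) + (xn - x)/sigma^2)^2]
# is minimized, for a linear score model s(xn) = -theta * xn, by
# theta = 1/(1 + sigma^2): the true score slope of the noisy marginal
# N(0, 1 + sigma^2).
import random

random.seed(1)
sigma = 0.5
n = 200000

num = den = 0.0
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    xn = x + random.gauss(0.0, sigma)
    target = (xn - x) / sigma ** 2      # DSM regression target (negated score)
    num += xn * target                   # least-squares fit of theta in
    den += xn * xn                       #   -s(xn) = theta * xn

theta_hat = num / den
theta_true = 1.0 / (1.0 + sigma ** 2)    # = 0.8 for sigma = 0.5
assert abs(theta_hat - theta_true) < 0.03
```

The sketch shows the key property of DSM: the noisy pair (x, xn) supplies a regression target for the score without ever sampling from the model, which is what makes training fast compared to maximum likelihood.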
10. Local interneurons in the murine visual thalamus have diverse receptive fields and can provide feature selective inhibition to relay cells.
- Author
-
Gorin AS, Miao Y, Ahn S, Suresh V, Su Y, Ciftcioglu UM, Sommer FT, and Hirsch JA
- Abstract
By influencing the type and quality of information that relay cells transmit, local interneurons in thalamus have a powerful impact on cortex. To define the sensory features that these inhibitory neurons encode, we mapped receptive fields of optogenetically identified cells in the murine dorsolateral geniculate nucleus. Although few in number, local interneurons had diverse types of receptive fields, like their counterpart relay cells. This result differs markedly from visual cortex, where inhibitory cells are typically less selective than excitatory cells. To explore how thalamic interneurons might converge on relay cells, we took a computational approach. Using an evolutionary algorithm to search through a library of interneuron models generated from our results, we show that aggregated output from different groups of local interneurons can simulate the inhibitory component of the relay cell's receptive field. Thus, our work provides proof-of-concept that groups of diverse interneurons can supply feature-specific inhibition to relay cells.
- Published
- 2023
- Full Text
- View/download PDF
11. Efficient Decoding of Compositional Structure in Holistic Representations.
- Author
-
Kleyko D, Bybee C, Huang PC, Kymn CJ, Olshausen BA, Frady EP, and Sommer FT
- Abstract
We investigate the task of retrieving information from compositional distributed representations formed by hyperdimensional computing/vector symbolic architectures and present novel techniques that achieve new information rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task. The techniques are categorized into four groups. We then evaluate the considered techniques in several settings that involve, for example, inclusion of external noise and storage elements with reduced precision. In particular, we find that the decoding techniques from the sparse coding and compressed sensing literature (rarely used for hyperdimensional computing/vector symbolic architectures) are also well suited for decoding information from the compositional distributed representations. Combining these decoding techniques with interference cancellation ideas from communications improves previously reported bounds (Hersche et al., 2021) of the information rate of the distributed representations from 1.20 to 1.40 bits per dimension for smaller codebooks and from 0.60 to 1.26 bits per dimension for larger codebooks.
- Published
- 2023
- Full Text
- View/download PDF
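The interference-cancellation idea borrowed from communications can be sketched in a few lines: decode the strongest item in a superposition, subtract its codevector, and repeat. The dimensions and codebook sizes below are arbitrary choices for the example, and this greedy decoder is far simpler than the sparse-coding and compressed-sensing decoders evaluated in the paper.

```python
# Sketch of retrieving items from a compositional superposition with
# successive interference cancellation: K random bipolar codevectors are
# summed into one trace; each item is decoded by correlation with the
# codebook, then its reconstruction is subtracted before decoding the next.
import random

random.seed(2)
D, CODEBOOK, K = 1024, 64, 8

book = [[random.choice((-1, 1)) for _ in range(D)] for _ in range(CODEBOOK)]
stored = random.sample(range(CODEBOOK), K)
trace = [sum(book[i][d] for i in stored) for d in range(D)]

def best_match(vec):
    """Index of the codevector with the highest correlation to vec."""
    sims = [sum(b[d] * vec[d] for d in range(D)) for b in book]
    return max(range(CODEBOOK), key=lambda i: sims[i])

decoded = []
residual = trace[:]
for _ in range(K):
    i = best_match(residual)                               # strongest item
    decoded.append(i)
    residual = [r - c for r, c in zip(residual, book[i])]  # cancel it

assert sorted(decoded) == sorted(stored)
```

Cancelling each decoded item removes its crosstalk from the residual, which is why this style of decoding supports higher information rates than one-shot correlation decoding.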
12. Variable Binding for Sparse Distributed Representations: Theory and Applications.
- Author
-
Frady EP, Kleyko D, and Sommer FT
- Subjects
- Brain, Problem Solving, Neurons physiology, Neural Networks, Computer, Cognition
- Abstract
Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatorial expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, another well-known binding operation which increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that include a reduction of the tensor matrix into a single sparse vector. One binding method for general sparse vectors uses random projections; the other, block-local circular convolution, is defined for sparse vectors with block structure, sparse block-codes. Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random projection-based binding also works, but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches performance similar to that of classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
- Published
- 2023
- Full Text
- View/download PDF
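Dimensionality-preserving binding via circular convolution, the dense operation that the paper's block-local variant applies per block of a sparse block-code, can be sketched directly. The dimension and the similarity threshold below are choices made for this example; the sparse block-code machinery itself is not reproduced.

```python
# Sketch of dimensionality-preserving binding via circular convolution
# (the dense HRR-style case). Binding a key with a value yields a vector of
# the same dimension; approximate unbinding convolves with the involution
# of the key and returns a noisy but recognizable copy of the value.
import math
import random

random.seed(3)
D = 256

def rand_vec():
    """Unit-norm random vector."""
    v = [random.gauss(0, 1) for _ in range(D)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cconv(a, b):
    """Circular convolution: the binding operation."""
    return [sum(a[k] * b[(i - k) % D] for k in range(D)) for i in range(D)]

def involution(a):
    """a~[i] = a[-i mod D]; convolving with it approximately inverts binding."""
    return [a[-i % D] for i in range(D)]

key, value = rand_vec(), rand_vec()
bound = cconv(key, value)                    # same dimension as key and value
retrieved = cconv(involution(key), bound)    # approximate unbinding

cos = sum(r * v for r, v in zip(retrieved, value)) / math.sqrt(
    sum(r * r for r in retrieved) * sum(v * v for v in value))
assert cos > 0.5                             # clearly closer to value than chance
```

Because the bound vector stays D-dimensional, nested structures can be built by repeated binding without the dimensionality blow-up of tensor product binding, which is the contrast the abstract draws.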
13. Vector Symbolic Architectures as a Computing Framework for Emerging Hardware.
- Author
-
Kleyko D, Davies M, Frady EP, Kanerva P, Kent SJ, Olshausen BA, Osipov E, Rabaey JM, Rachkovskij DA, Rahimi A, and Sommer FT
- Abstract
This article reviews recent progress in the development of the computing framework Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, emerging hardware, and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We demonstrate in this article that the field-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant to modern computing. In addition, we illustrate the distinguishing feature of Vector Symbolic Architectures, "computing in superposition," which sets it apart from conventional computing. It also opens the door to efficient solutions to the difficult combinatorial search problems inherent in AI applications. We sketch ways of demonstrating that Vector Symbolic Architectures are computationally universal. We see them acting as a framework for computing with distributed representations that can play the role of an abstraction layer for emerging computing hardware. This article serves as a reference for computer architects by illustrating the philosophy behind Vector Symbolic Architectures, techniques of distributed computing with them, and their relevance to emerging computing hardware, such as neuromorphic computing.
- Published
- 2022
14. Cellular Automata Can Reduce Memory Requirements of Collective-State Computing.
- Author
-
Kleyko D, Frady EP, and Sommer FT
- Abstract
Various nonclassical approaches to distributed information processing, such as neural networks, reservoir computing (RC), vector symbolic architectures (VSAs), and others, employ the principle of collective-state computing. In this type of computing, the variables relevant in a computation are superimposed into a single high-dimensional state vector, the collective state. The variable encoding uses a fixed set of random patterns, which has to be stored and kept available during the computation. In this article, we show that an elementary cellular automaton with rule 90 (CA90) enables a space-time tradeoff for collective-state computing models that use random dense binary representations, i.e., memory requirements can be traded off against computation by running CA90. We investigate the randomization behavior of CA90, in particular, the relation between the length of the randomization period and the size of the grid, and how CA90 preserves similarity in the presence of initialization noise. Based on these analyses, we discuss how to optimize a collective-state computing model in which CA90 expands representations on the fly from short seed patterns, rather than storing the full set of random patterns. The CA90 expansion is applied and tested in concrete scenarios using RC and VSAs. Our experimental results show that collective-state computing with CA90 expansion performs similarly to traditional collective-state models, in which random patterns are generated initially by a pseudorandom number generator and then stored in a large memory.
- Published
- 2022
- Full Text
- View/download PDF
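Rule 90 itself is a one-line update: each cell becomes the XOR of its two neighbors. The short sketch below shows the space-time tradeoff in miniature; the grid size and number of expansion steps are arbitrary choices for this example.

```python
# Rule 90 (CA90) on a circular grid: each next-state cell is the XOR of its
# two neighbors. A short random seed can be expanded on the fly into a
# sequence of pseudorandom codevectors instead of storing them all, which is
# the space-time tradeoff discussed in the article.
import random

def ca90_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

random.seed(4)
seed = [random.randint(0, 1) for _ in range(64)]

# Expand the seed into a sequence of codevectors by iterating the rule.
codevectors, state = [], seed
for _ in range(8):
    state = ca90_step(state)
    codevectors.append(state)

# Expansion is deterministic: rerunning from the same seed reproduces the
# same codevectors, so only the 64-bit seed needs to be stored.
state2 = seed
for cv in codevectors:
    state2 = ca90_step(state2)
    assert state2 == cv
```

Storing one 64-bit seed instead of eight 64-bit patterns trades an eightfold memory saving for eight cheap XOR sweeps, which scales to the much larger dimensions used in collective-state models.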
15. Is Neuroscience FAIR? A Call for Collaborative Standardisation of Neuroscience Data.
- Author
-
Poline JB, Kennedy DN, Sommer FT, Ascoli GA, Van Essen DC, Ferguson AR, Grethe JS, Hawrylycz MJ, Thompson PM, Poldrack RA, Ghosh SS, Keator DB, Athey TL, Vogelstein JT, Mayberg HS, and Martone ME
- Subjects
- Neurosciences, Data Collection
- Abstract
In this perspective article, we consider the critical issue of data and other research object standardisation and, specifically, how international collaboration, and organizations such as the International Neuroinformatics Coordinating Facility (INCF) can encourage that emerging neuroscience data be Findable, Accessible, Interoperable, and Reusable (FAIR). As neuroscientists engaged in the sharing and integration of multi-modal and multiscale data, we see the current insufficiency of standards as a major impediment in the Interoperability and Reusability of research results. We call for increased international collaborative standardisation of neuroscience data to foster integration and efficient reuse of research objects.
- Published
- 2022
- Full Text
- View/download PDF
16. Consciousness is supported by near-critical slow cortical electrodynamics.
- Author
-
Toker D, Pappas I, Lendner JD, Frohlich J, Mateos DM, Muthukumaraswamy S, Carhart-Harris R, Paff M, Vespa PM, Monti MM, Sommer FT, Knight RT, and D'Esposito M
- Subjects
- Animals, Brain Mapping, Humans, Cerebral Cortex physiology, Consciousness physiology, Electrophysiological Phenomena
- Abstract
Mounting evidence suggests that during conscious states, the electrodynamics of the cortex are poised near a critical point or phase transition and that this near-critical behavior supports the vast flow of information through cortical networks during conscious states. Here, we empirically identify a mathematically specific critical point near which waking cortical oscillatory dynamics operate, which is known as the edge-of-chaos critical point, or the boundary between stability and chaos. We do so by applying the recently developed modified 0-1 chaos test to electrocorticography (ECoG) and magnetoencephalography (MEG) recordings from the cortices of humans and macaques across normal waking, generalized seizure, anesthesia, and psychedelic states. Our evidence suggests that cortical information processing is disrupted during unconscious states because of a transition of low-frequency cortical electric oscillations away from this critical point; conversely, we show that psychedelics may increase the information richness of cortical activity by tuning low-frequency cortical oscillations closer to this critical point. Finally, we analyze clinical electroencephalography (EEG) recordings from patients with disorders of consciousness (DOC) and show that assessing the proximity of slow cortical oscillatory electrodynamics to the edge-of-chaos critical point may be useful as an index of consciousness in the clinical setting.
- Published
- 2022
- Full Text
- View/download PDF
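The 0-1 test that the paper modifies can be sketched in its classic form: the time series drives a 2-D walk whose mean-squared displacement grows diffusively for chaotic dynamics and stays bounded for regular dynamics. The sketch below uses the logistic map as a stand-in signal and a single test angle; the paper's modified, noise-robust variant is not reproduced here.

```python
# Sketch of the classic 0-1 test for chaos. The series x_j drives a walk
#   p_n = sum_j x_j cos(jc),  q_n = sum_j x_j sin(jc);
# the growth statistic K (here: correlation of mean-squared displacement
# with lag) is near 1 for chaotic dynamics and near 0 for regular dynamics.
import math

def k_statistic(x, c):
    n = len(x)
    p = q = 0.0
    ps, qs = [], []
    for j, xj in enumerate(x, start=1):
        p += xj * math.cos(j * c)
        q += xj * math.sin(j * c)
        ps.append(p)
        qs.append(q)
    ncut = n // 10
    ms = range(1, ncut)
    msd = [sum((ps[j + m] - ps[j]) ** 2 + (qs[j + m] - qs[j]) ** 2
               for j in range(n - ncut)) / (n - ncut) for m in ms]
    # Correlation of M(m) with m: ~1 for diffusive growth, ~0 for bounded.
    mm = sum(ms) / len(ms)
    mv = sum(msd) / len(msd)
    cov = sum((a - mm) * (b - mv) for a, b in zip(ms, msd))
    sd1 = math.sqrt(sum((a - mm) ** 2 for a in ms))
    sd2 = math.sqrt(sum((b - mv) ** 2 for b in msd))
    return cov / (sd1 * sd2)

def logistic(r, n, x0=0.4):
    xs, x = [], x0
    for _ in range(n + 100):
        x = r * x * (1 - x)
        xs.append(x)
    return xs[100:]                 # drop the transient

c = 1.7                             # one test angle; in practice one medians over c
k_chaotic = k_statistic(logistic(4.0, 2000), c)   # chaotic regime
k_regular = k_statistic(logistic(3.2, 2000), c)   # period-2 regime
assert k_chaotic > k_regular + 0.3
```

In practice the statistic is computed for many angles c and summarized by the median, and the paper's modification adds robustness to measurement noise in empirical recordings such as ECoG and MEG.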
17. A Neural Network MCMC Sampler That Maximizes Proposal Entropy.
- Author
-
Li Z, Chen Y, and Sommer FT
- Abstract
Markov Chain Monte Carlo (MCMC) methods sample from unnormalized probability distributions and offer guarantees of exact sampling. However, in the continuous case, unfavorable geometry of the target distribution can greatly limit the efficiency of MCMC methods. Augmenting samplers with neural networks can potentially improve their efficiency. Previous neural network-based samplers were trained with objectives that either did not explicitly encourage exploration, or contained a term that encouraged exploration but only for well-structured distributions. Here we propose to maximize proposal entropy for adapting the proposal to distributions of any shape. To optimize proposal entropy directly, we devised a neural network MCMC sampler that has a flexible and tractable proposal distribution. Specifically, our network architecture utilizes the gradient of the target distribution for generating proposals. Our model achieved significantly higher efficiency than previous neural network MCMC techniques in a variety of sampling tasks, sometimes by more than an order of magnitude. Further, the sampler was demonstrated through the training of a convergent energy-based model of natural images. The adaptive sampler achieved unbiased sampling with significantly higher proposal entropy than a Langevin dynamics sampler. The trained sampler also achieved better sample quality.
- Published
- 2021
- Full Text
- View/download PDF
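The classical gradient-based proposal that the paper's learned sampler generalizes is Metropolis-adjusted Langevin (MALA), whose proposal mean is shifted by the score of the target. The sketch below shows only this fixed baseline on a toy 1-D Gaussian; the step size and target are chosen for the example, and the paper's entropy-maximizing network is not reproduced.

```python
# Sketch of the Metropolis-adjusted Langevin algorithm (MALA): the proposal
# uses the gradient of the target log-density, and a Metropolis-Hastings
# correction with the asymmetric Gaussian proposal keeps sampling unbiased.
import math
import random

random.seed(6)

def log_p(x):                        # target: standard normal (unnormalized)
    return -0.5 * x * x

def grad_log_p(x):
    return -x

def mala(n_steps, eps=0.9, x0=3.0):
    x, samples = x0, []
    for _ in range(n_steps):
        mu = x + 0.5 * eps ** 2 * grad_log_p(x)
        prop = random.gauss(mu, eps)
        mu_back = prop + 0.5 * eps ** 2 * grad_log_p(prop)
        # log MH ratio; the Gaussian normalizers cancel.
        log_a = (log_p(prop) - log_p(x)
                 - (x - mu_back) ** 2 / (2 * eps ** 2)
                 + (prop - mu) ** 2 / (2 * eps ** 2))
        if math.log(random.random()) < log_a:
            x = prop
        samples.append(x)
    return samples

s = mala(50000)[5000:]               # drop burn-in
mean = sum(s) / len(s)
var = sum((x - mean) ** 2 for x in s) / len(s)
assert abs(mean) < 0.05 and abs(var - 1.0) < 0.1
```

The paper replaces this fixed gradient-based proposal with a trained network and an objective that explicitly maximizes proposal entropy, which is what allows larger, better-adapted moves than plain Langevin dynamics.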
18. Resonator Networks, 2: Factorization Performance and Capacity Compared to Optimization-Based Methods.
- Author
-
Kent SJ, Frady EP, Sommer FT, and Olshausen BA
- Subjects
- Animals, Humans, Brain physiology, Cognition physiology, Neural Networks, Computer
- Abstract
We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer (2020), a companion article in this issue, to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures. Given a composite vector formed by the Hadamard product between a discrete set of high-dimensional vectors, a resonator network can efficiently decompose the composite into these factors. We compare the performance of resonator networks against optimization-based methods, including Alternating Least Squares and several gradient-based algorithms, showing that resonator networks are superior in several important ways. This advantage is achieved by leveraging a combination of nonlinear dynamics and searching in superposition, by which estimates of the correct solution are formed from a weighted superposition of all possible solutions. While the alternative methods also search in superposition, the dynamics of resonator networks allow them to strike a more effective balance between exploring the solution space and exploiting local information to drive the network toward probable solutions. Resonator networks are not guaranteed to converge, but within a particular regime they almost always do. In exchange for relaxing the guarantee of global convergence, resonator networks are dramatically more effective at finding factorizations than all alternative approaches considered.
- Published
- 2020
- Full Text
- View/download PDF
19. Resonator Networks, 1: An Efficient Solution for Factoring High-Dimensional, Distributed Representations of Data Structures.
- Author
-
Frady EP, Kent SJ, Olshausen BA, and Sommer FT
- Subjects
- Animals, Humans, Brain physiology, Cognition physiology, Neural Networks, Computer
- Abstract
The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition. Here we show how this may be accomplished within the framework of Vector Symbolic Architectures (VSAs) (Plate, 1991; Gayler, 1998; Kanerva, 1996), whereby data structures are encoded by combining high-dimensional vectors with operations that together form an algebra on the space of distributed representations. In particular, we propose an efficient solution to a hard combinatorial search problem that arises when decoding elements of a VSA data structure: the factorization of products of multiple codevectors. Our proposed algorithm, called a resonator network, is a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion. We show in two examples-parsing of a tree-like data structure and parsing of a visual scene-how the factorization problem arises and how the resonator network can solve it. More broadly, resonator networks open the possibility of applying VSAs to myriad artificial intelligence problems in real-world domains. The companion article in this issue (Kent, Frady, Sommer, & Olshausen, 2020) presents a rigorous analysis and evaluation of the performance of resonator networks, showing it outperforms alternative approaches.
- Published
- 2020
- Full Text
- View/download PDF
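The factorization problem solved by resonator networks in the two articles above can be shown with a toy two-factor instance: given the Hadamard product of one codevector from each of two known codebooks, alternately unbind with the current estimate of one factor and clean up the other against its codebook. The dimensions, codebook sizes, and iteration count below are choices for this sketch, not the papers' settings, and the full dynamics and capacity analysis are in the companion articles.

```python
# Toy resonator network for two factors: given c = a * b (element-wise
# product of bipolar codevectors drawn from known codebooks A and B),
# alternately estimate each factor by unbinding with the estimate of the
# other and projecting onto its codebook (pattern completion).
import random

random.seed(7)
D, M = 512, 20                                   # dimension, codebook size

def rand_bipolar():
    return [random.choice((-1, 1)) for _ in range(D)]

A = [rand_bipolar() for _ in range(M)]
B = [rand_bipolar() for _ in range(M)]
ia, ib = 3, 11                                   # hidden ground-truth factors
c = [x * y for x, y in zip(A[ia], B[ib])]

def project(codebook, v):
    """Clean up v toward the codebook: sign of the similarity-weighted sum
    of codevectors (the resonator's pattern-completion step)."""
    sims = [sum(p * q for p, q in zip(cv, v)) for cv in codebook]
    out = [sum(s * cv[d] for s, cv in zip(sims, codebook)) for d in range(D)]
    return [1 if x >= 0 else -1 for x in out]

# Initialize the estimate of b as the sign of the superposition of all
# candidates: "searching in superposition".
b_hat = [1 if sum(cv[d] for cv in B) >= 0 else -1 for d in range(D)]

for _ in range(20):                              # resonator iterations
    a_hat = project(A, [ci * bi for ci, bi in zip(c, b_hat)])  # unbind b
    b_hat = project(B, [ci * ai for ci, ai in zip(c, a_hat)])  # unbind a

assert a_hat == A[ia] and b_hat == B[ib]
```

Each iteration sharpens both estimates simultaneously, so the network searches the M x M space of factor pairs without enumerating it, which is the "searching in superposition" property the companion analysis credits for outperforming alternating least squares and gradient methods.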
20. NWB Query Engines: Tools to Search Data Stored in Neurodata Without Borders Format.
- Author
-
Ježek P, Teeters JL, and Sommer FT
- Abstract
The Neurodata Without Borders (NWB) format is a current technology for storing neurophysiology data along with the associated metadata. Data stored in the format are organized into separate HDF5 files, each file usually storing the data associated with a single recording session. While the NWB format provides a structured method for storing data, so far there have been no tools that enable searching a collection of NWB files to find data of interest for a particular purpose. We describe here three tools that enable searching NWB files. The tools have different features, making each of them most useful for a particular task. The first tool, called the NWB Query Engine, is written in Java. It allows searching the complete content of NWB files. It was designed for the first version of NWB (NWB 1) and supports most (but not all) features of the most recent version (NWB 2). For some searches, it is the fastest tool. The second tool, called "search_nwb", is written in Python and also allows searching the complete contents of NWB files. It works with both NWB 1 and NWB 2, as does the third tool. The third tool, called "nwbindexer", enables searching a collection of NWB files using a two-step process. In the first step, a utility is run which creates an SQLite database containing the metadata from a collection of NWB files. This database is then searched in the second step, using another utility. Once the index is built, this two-step process allows faster searches than the other tools, but does not support as complete a range of searches. All three tools use a simple query language which was developed for this project. Software integrating the three tools into a web interface is provided, which enables searching NWB files by submitting a web form.
- Published
- 2020
- Full Text
- View/download PDF
21. Visual Information Processing in the Ventral Division of the Mouse Lateral Geniculate Nucleus of the Thalamus.
- Author
-
Ciftcioglu UM, Suresh V, Ding KR, Sommer FT, and Hirsch JA
- Subjects
- Animals, Evoked Potentials, Visual physiology, Female, Male, Mice, Mice, Inbred C57BL, Visual Pathways physiology, Geniculate Bodies physiology, Visual Perception physiology
- Abstract
Even though the lateral geniculate nucleus of the thalamus (LGN) is associated with form vision, that is not its sole role. Only the dorsal portion of LGN (dLGN) projects to V1. The ventral division (vLGN) connects subcortically, sending inhibitory projections to sensorimotor structures, including the superior colliculus (SC) and regions associated with certain behavioral states, such as fear (Monavarfeshani et al., 2017; Salay et al., 2018). We combined computational, physiological, and anatomical approaches to explore visual processing in vLGN of mice of both sexes, making comparisons to dLGN and SC for perspective. Compatible with past, qualitative descriptions, the receptive fields we quantified in vLGN were larger than those in dLGN, and most cells preferred bright versus dark stimuli (Harrington, 1997). Dendritic arbors spanned the length and/or width of vLGN and were often asymmetric, positioned to collect input from large but discrete territories. By contrast, arbors in dLGN are compact (Krahe et al., 2011). Consistent with spatially coarse receptive fields in vLGN, visually evoked changes in spike timing were less precise than for dLGN and SC. Notably, however, the membrane currents and spikes of some cells in vLGN displayed gamma oscillations whose phase and strength varied with stimulus pattern, as for SC (Stitt et al., 2013). Thus, vLGN can engage its targets using oscillation-based and conventional rate codes. Finally, dark shadows activate SC and drive escape responses, whereas vLGN prefers bright stimuli. Thus, one function of long-range inhibitory projections from vLGN might be to enable movement by releasing motor targets, such as SC, from suppression. SIGNIFICANCE STATEMENT Only the dorsal lateral geniculate nucleus (dLGN) connects to cortex to serve form vision; the ventral division (vLGN) projects subcortically to sensorimotor nuclei, including the superior colliculus (SC), via long-range inhibitory connections. Here, we asked how vLGN processes visual information, making comparisons with dLGN and SC for perspective. Cells in vLGN versus dLGN had wider dendritic arbors, larger receptive fields, and fired with lower temporal precision, consistent with a modulatory role. Like SC, but not dLGN, visual stimuli entrained oscillations in vLGN, perhaps reflecting shared strategies for visuomotor processing. Finally, most neurons in vLGN preferred bright shapes, whereas dark stimuli activate SC and drive escape behaviors, suggesting that vLGN enables rapid movement by releasing target motor structures from inhibition.
- Published
- 2020
- Full Text
- View/download PDF
22. A simple method for detecting chaos in nature.
- Author
-
Toker D, Sommer FT, and D'Esposito M
- Subjects
- Humans, Heart Rate Determination instrumentation, Nonlinear Dynamics, Stochastic Processes
- Abstract
Chaos, or exponential sensitivity to small perturbations, appears everywhere in nature. Moreover, chaos is predicted to play diverse functional roles in living systems. A method for detecting chaos from empirical measurements should therefore be a key component of the biologist's toolkit. But classic chaos-detection tools are highly sensitive to measurement noise and break down for common edge cases, making it difficult to detect chaos in domains, like biology, where measurements are noisy. However, newer tools promise to overcome these limitations. Here, we combine several such tools into an automated processing pipeline, and show that our pipeline can detect the presence (or absence) of chaos in noisy recordings, even for difficult edge cases. As a first-pass application of our pipeline, we show that heart rate variability is not chaotic as some have proposed, and instead reflects a stochastic process in both health and disease. Our tool is easy to use and freely available., Competing Interests: The authors declare no competing interests., (© The Author(s) 2020.)
- Published
- 2020
- Full Text
- View/download PDF
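The pipeline above combines several modern chaos-detection tools; one core ingredient of this family of methods is the 0-1 test of Gottwald and Melbourne. The sketch below (Python with NumPy; parameter choices and thresholds are illustrative, and this is the generic modified 0-1 test, not the authors' released tool) applies it to the logistic map in a chaotic and a periodic regime:

```python
import numpy as np

def zero_one_test(x, n_c=20, seed=0):
    """Modified 0-1 test: K near 1 suggests chaos, K near 0 regular dynamics."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    N = len(x)
    ncut = N // 10
    n = np.arange(1, ncut + 1)
    j = np.arange(1, N + 1)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        # drive a 2-D translation variable (p, q) with the observable
        p = np.cumsum(x * np.cos(j * c))
        q = np.cumsum(x * np.sin(j * c))
        # mean-square displacement, minus the oscillatory term of the modified test
        M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2) for k in n])
        M -= np.mean(x) ** 2 * (1 - np.cos(n * c)) / (1 - np.cos(c))
        # K = correlation of displacement with time: diffusive growth signals chaos
        Ks.append(np.corrcoef(n, M)[0, 1])
    return float(np.median(Ks))

def logistic(r, n=2000, x0=0.5, burn=200):
    x, out = x0, []
    for i in range(n + burn):
        x = r * x * (1 - x)
        if i >= burn:
            out.append(x)
    return np.array(out)

K_chaotic = zero_one_test(logistic(3.99))  # chaotic regime
K_regular = zero_one_test(logistic(3.2))   # period-2 regime
```

The median over random frequencies c avoids resonances, one of the edge cases the paper's pipeline is designed to handle more systematically.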
23. Robust computation with rhythmic spike patterns.
- Author
-
Frady EP and Sommer FT
- Subjects
- Humans, Action Potentials physiology, Interneurons physiology, Memory physiology, Models, Neurological, Nerve Net physiology, Neural Networks, Computer
- Abstract
Information coding by precise timing of spikes can be faster and more energy efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a type of attractor neural network in complex state space and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building on Hebbian neural associative memories, like Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks. Complex phasor patterns whose components can assume continuous-valued phase angles and binary magnitudes can be stored and retrieved as stable fixed points in the network dynamics. TPAM achieves high memory capacity when storing sparse phasor patterns, and we derive the energy function that governs its fixed-point attractor dynamics. Second, we construct 2 spiking neural networks to approximate the complex algebraic computations in TPAM, a reductionist model with resonate-and-fire neurons and a biologically plausible network of integrate-and-fire neurons with synaptic delays and recurrently connected inhibitory interneurons. The fixed points of TPAM correspond to stable periodic states of precisely timed spiking activity that are robust to perturbation. The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices., Competing Interests: The authors declare no conflict of interest., (Copyright © 2019 the Author(s). Published by PNAS.)
- Published
- 2019
- Full Text
- View/download PDF
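The Hebbian storage of complex phasor patterns described above can be sketched in a few lines. This is a simplified TPAM-style toy (NumPy; sizes are illustrative, and a k-winners-take-all cleanup stands in for the paper's fixed magnitude threshold):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 200, 5, 20  # neurons, stored patterns, active units per pattern

def random_phasor():
    z = np.zeros(N, complex)
    idx = rng.choice(N, K, replace=False)
    z[idx] = np.exp(1j * rng.uniform(0, 2 * np.pi, K))  # unit phasors on a sparse support
    return z

patterns = [random_phasor() for _ in range(M)]
W = sum(np.outer(z, z.conj()) for z in patterns)  # Hebbian outer-product learning
np.fill_diagonal(W, 0)                            # no self-connections

def retrieve(cue, steps=10):
    z = cue.copy()
    for _ in range(steps):
        u = W @ z
        z = np.zeros(N, complex)
        top = np.argsort(np.abs(u))[-K:]  # k-winners stand-in for the threshold
        z[top] = u[top] / np.abs(u[top])  # active units carry phase only
    return z

target = patterns[0]
cue = target * (rng.random(N) < 0.6)              # drop roughly 40% of the active units
overlap = abs(retrieve(cue).conj() @ target) / K  # ~1 when the fixed point is recovered
```

In the paper, these complex fixed points are then mapped to stable, precisely timed periodic spike patterns via a phase-to-timing mapping.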
24. Information integration in large brain networks.
- Author
-
Toker D and Sommer FT
- Subjects
- Animals, Computational Biology, Macaca, Cerebral Cortex physiology, Information Theory, Models, Neurological, Nerve Net physiology
- Abstract
An outstanding problem in neuroscience is to understand how information is integrated across the many modules of the brain. While classic information-theoretic measures have transformed our understanding of feedforward information processing in the brain's sensory periphery, comparable measures for information flow in the massively recurrent networks of the rest of the brain have been lacking. To address this, recent work in information theory has produced a sound measure of network-wide "integrated information", which can be estimated from time-series data. But a computational hurdle has stymied attempts to measure large-scale information integration in real brains. Specifically, the measurement of integrated information involves a combinatorial search for the informational "weakest link" of a network, a process whose computation time explodes super-exponentially with network size. Here, we show that spectral clustering, applied on the correlation matrix of time-series data, provides an approximate but robust solution to the search for the informational weakest link of large networks. This reduces the computation time for integrated information in large systems from longer than the lifespan of the universe to just minutes. We evaluate this solution in brain-like systems of coupled oscillators as well as in high-density electrocorticography data from two macaque monkeys, and show that the informational "weakest link" of the monkey cortex splits posterior sensory areas from anterior association areas. Finally, we use our solution to provide evidence in support of the long-standing hypothesis that information integration is maximized by networks with a high global efficiency, and that modular network structures promote the segregation of information., Competing Interests: The authors have declared that no competing interests exist.
- Published
- 2019
- Full Text
- View/download PDF
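The approximation above, spectral clustering on a correlation matrix to locate a network's informational "weakest link", amounts to a sign split of the Fiedler vector. A toy NumPy sketch (the synthetic data and the |correlation| affinity are illustrative choices, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
# two 5-channel modules driven by independent latent sources; the weak
# coupling between them is the informational "weakest link"
s1, s2 = rng.standard_normal((2, T))
X = np.empty((T, 10))
X[:, :5] = s1[:, None] + 0.5 * rng.standard_normal((T, 5))
X[:, 5:] = s2[:, None] + 0.5 * rng.standard_normal((T, 5))

A = np.abs(np.corrcoef(X.T))        # affinity from the correlation matrix
np.fill_diagonal(A, 0)
d = A.sum(1)
Dm = np.diag(1 / np.sqrt(d))
L_sym = np.eye(10) - Dm @ A @ Dm    # symmetric normalized Laplacian
vals, vecs = np.linalg.eigh(L_sym)
fiedler = vecs[:, 1]                # eigenvector of the 2nd-smallest eigenvalue
partition = fiedler > 0             # sign split approximates the minimum cut
```

This replaces the super-exponential combinatorial search over bipartitions with a single eigendecomposition.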
25. A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks.
- Author
-
Frady EP, Kleyko D, and Sommer FT
- Subjects
- Algorithms, Computer Simulation, Humans, Memory, Short-Term physiology, Models, Neurological, Neural Networks, Computer, Neurons physiology
- Abstract
To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are matched. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
- Published
- 2018
- Full Text
- View/download PDF
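The VSA-style sequence indexing these networks implement can be illustrated with a fixed permutation standing in for the orthogonal recurrent weights and winner-take-all cleanup as the symbolic readout. A minimal NumPy sketch under those assumptions (dimensions and the sequence are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D, V = 2048, 26                         # vector dimension, alphabet size
codebook = rng.choice([-1, 1], (V, D))  # random bipolar symbol vectors
perm = rng.permutation(D)               # fixed permutation = orthogonal recurrence
inv = np.argsort(perm)                  # its inverse

def store(seq):
    """Superpose the sequence; older items accumulate more permutations."""
    m = np.zeros(D)
    for s in seq:
        m = m[perm] + codebook[s]
    return m

def recall(m, k):
    """Recover the item k steps before the end (k = 0 is the most recent)."""
    for _ in range(k):
        m = m[inv]
    return int(np.argmax(codebook @ m))  # winner-take-all cleanup

seq = [3, 1, 4, 1, 5, 9, 2, 6]
m = store(seq)
decoded = [recall(m, k) for k in range(len(seq))]  # most recent item first
```

Crosstalk noise from the superposition grows with sequence length, which is exactly the retrieval-accuracy trade-off the paper's theory quantifies.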
26. Spatial scale of receptive fields in the visual sector of the cat thalamic reticular nucleus.
- Author
-
Soto-Sánchez C, Wang X, Vaingankar V, Sommer FT, and Hirsch JA
- Subjects
- Animals, Attention, Cats, Geniculate Bodies anatomy & histology, Neurons cytology, Ventral Thalamic Nuclei anatomy & histology, Visual Pathways anatomy & histology, Action Potentials, Geniculate Bodies physiology, Neurons physiology, Ventral Thalamic Nuclei physiology, Visual Pathways physiology
- Abstract
Inhibitory projections from the visual sector of the thalamic reticular nucleus to the lateral geniculate nucleus complete the earliest feedback loop in the mammalian visual pathway and regulate the flow of information from retina to cortex. There are two competing hypotheses about the function of the thalamic reticular nucleus. One regards the structure as a thermostat that uniformly regulates thalamic activity through negative feedback. Alternatively, the searchlight hypothesis argues for a role in focal attentional modulation through positive feedback, consistent with observations that behavioral state influences reticular activity. Here, we address the question of whether cells in the reticular nucleus have receptive fields small enough to provide localized feedback by devising methods to quantify the size of these fields across visual space. Our results show that reticular neurons in the cat operate over discrete spatial scales, at once supporting the searchlight hypothesis and a role in feature selective sensory processing. The searchlight hypothesis proposes that the thalamic reticular nucleus regulates thalamic relay activity through focal attentional modulation. Here the authors show that the receptive field sizes of reticular neurons are small enough to provide localized feedback onto thalamic neurons in the visual pathway.
- Published
- 2017
- Full Text
- View/download PDF
27. Sparse coding of ECoG signals identifies interpretable components for speech control in human sensorimotor cortex.
- Author
-
Bouchard KE, Bujan AF, Chang EF, and Sommer FT
- Subjects
- Brain Mapping, Brain-Computer Interfaces, Humans, Sensorimotor Cortex, Tongue, Electrocorticography, Speech
- Abstract
The concept of sparsity has proven useful to understanding elementary neural computations in sensory systems. However, the role of sparsity in motor regions is poorly understood. Here, we investigated the functional properties of sparse structure in neural activity collected with high-density electrocorticography (ECoG) from speech sensorimotor cortex (vSMC) in neurosurgical patients. Using independent components analysis (ICA), we found individual components corresponding to individual major oral articulators (i.e., Coronal Tongue, Dorsal Tongue, Lips), which were selectively activated during utterances that engaged that articulator on single trials. Some of the components corresponded to spatially sparse activations. Components with similar properties were also extracted using convolutional sparse coding (CSC), and required less data pre-processing. Finally, individual utterances could be accurately decoded from vSMC ECoG recordings using linear classifiers trained on the high-dimensional sparse codes generated by CSC. Together, these results suggest that sparse coding may be an important framework and tool for understanding sensory-motor activity generating complex behaviors, and may be useful for brain-machine interfaces.
- Published
- 2017
- Full Text
- View/download PDF
28. Enabling an Open Data Ecosystem for the Neurosciences.
- Author
-
Wiener M, Sommer FT, Ives ZG, Poldrack RA, and Litt B
- Published
- 2016
- Full Text
- View/download PDF
29. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination.
- Author
-
Bouchard KE, Aimone JB, Chun M, Dean T, Denker M, Diesmann M, Donofrio DD, Frank LM, Kasthuri N, Koch C, Ruebel O, Simon HD, Sommer FT, and Prabhat
- Subjects
- Humans, Computing Methodologies, Information Dissemination methods, Information Systems organization & administration, Neurosciences methods
- Abstract
Opportunities offered by new neuro-technologies are threatened by lack of coherent plans to analyze, manage, and understand the data. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations., (Published by Elsevier Inc.)
- Published
- 2016
- Full Text
- View/download PDF
30. Synaptic Contributions to Receptive Field Structure and Response Properties in the Rodent Lateral Geniculate Nucleus of the Thalamus.
- Author
-
Suresh V, Çiftçioğlu UM, Wang X, Lala BM, Ding KR, Smith WA, Sommer FT, and Hirsch JA
- Subjects
- Animals, Brain Mapping, Cats, Female, Male, Mice, Mice, Inbred C57BL, Models, Neurological, Neural Inhibition physiology, Rats, Rats, Long-Evans, Retinal Ganglion Cells physiology, Species Specificity, Synaptic Transmission physiology, Geniculate Bodies physiology, Nerve Net physiology, Synapses physiology, Visual Fields physiology, Visual Pathways physiopathology, Visual Perception physiology
- Abstract
Comparative physiological and anatomical studies have greatly advanced our understanding of sensory systems. Many lines of evidence show that the murine lateral geniculate nucleus (LGN) has unique attributes, compared with other species such as cat and monkey. For example, in rodent, thalamic receptive field structure is markedly diverse, and many cells are sensitive to stimulus orientation and direction. To explore shared and different strategies of synaptic integration across species, we made whole-cell recordings in vivo from the murine LGN during the presentation of visual stimuli, analyzed the results with different computational approaches, and compared our findings with those from cat. As for carnivores, murine cells with classical center-surround receptive fields had a "push-pull" structure of excitation and inhibition within a given On or Off subregion. These cells compose the largest single population in the murine LGN (∼40%), indicating that push-pull is key in the form vision pathway across species. For two cell types with overlapping On and Off responses, which recalled either W3 or suppressed-by-contrast ganglion cells in murine retina, inhibition took a different form and was most pronounced for spatially extensive stimuli. Other On-Off cells were selective for stimulus orientation and direction. In these cases, retinal inputs were tuned and, for oriented cells, the second-order subunit of the receptive field predicted the preferred angle. By contrast, suppression was not tuned and appeared to sharpen stimulus selectivity. Together, our results provide new perspectives on the role of excitation and inhibition in retinothalamic processing., Significance Statement: We explored the murine lateral geniculate nucleus from a comparative physiological perspective. In cat, most retinal cells have center-surround receptive fields and push-pull excitation and inhibition, including neurons with the smallest (highest acuity) receptive fields. 
The same is true for thalamic relay cells. In mouse retina, the most numerous cell type has the smallest receptive fields but lacks push-pull. The most common receptive field in rodent thalamus, however, is center-surround with push-pull. Thus, receptive field structure supersedes size per se for form vision. Further, for many orientation-selective cells, the second-order component of the receptive field aligned with stimulus preference, whereas suppression was untuned. Thus, inhibition may improve spatial resolution and sharpen other forms of selectivity in rodent lateral geniculate nucleus., (Copyright © 2016 the authors 0270-6474/16/3610949-15$15.00/0.)
- Published
- 2016
- Full Text
- View/download PDF
31. Structural Plasticity, Effectual Connectivity, and Memory in Cortex.
- Author
-
Knoblauch A and Sommer FT
- Abstract
Learning and memory are commonly attributed to the modification of synaptic strengths in neuronal networks. More recent experiments have also revealed a major role of structural plasticity including elimination and regeneration of synapses, growth and retraction of dendritic spines, and remodeling of axons and dendrites. Here we work out the idea that one likely function of structural plasticity is to increase "effectual connectivity" in order to improve the capacity of sparsely connected networks to store Hebbian cell assemblies that are supposed to represent memories. For this we define effectual connectivity as the fraction of synaptically linked neuron pairs within a cell assembly representing a memory. We show by theory and numerical simulation the close links between effectual connectivity and both information storage capacity of neural networks and effective connectivity as commonly employed in functional brain imaging and connectome analysis. Then, by applying our model to a recently proposed memory model, we can give improved estimates on the number of cell assemblies that can be stored in a cortical macrocolumn assuming realistic connectivity. Finally, we derive a simplified model of structural plasticity to enable large scale simulation of memory phenomena, and apply our model to link ongoing adult structural plasticity to recent behavioral data on the spacing effect of learning.
- Published
- 2016
- Full Text
- View/download PDF
32. Neurodata Without Borders: Creating a Common Data Format for Neurophysiology.
- Author
-
Teeters JL, Godfrey K, Young R, Dang C, Friedsam C, Wark B, Asari H, Peron S, Li N, Peyrache A, Denisov G, Siegle JH, Olsen SR, Martin C, Chun M, Tripathy S, Blanche TJ, Harris K, Buzsáki G, Koch C, Meister M, Svoboda K, and Sommer FT
- Subjects
- Humans, Neurosciences, Pilot Projects, Reproducibility of Results, Research Design standards, Software, Information Dissemination methods, Information Storage and Retrieval standards, Neurophysiology, Software Design
- Abstract
The Neurodata Without Borders (NWB) initiative promotes data standardization in neuroscience to increase research reproducibility and opportunities. In the first NWB pilot project, neurophysiologists and software developers produced a common data format for recordings and metadata of cellular electrophysiology and optical imaging experiments. The format specification, application programming interfaces, and sample datasets have been released., (Copyright © 2015 Elsevier Inc. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
33. How inhibitory circuits in the thalamus serve vision.
- Author
-
Hirsch JA, Wang X, Sommer FT, and Martinez LM
- Subjects
- Animals, Geniculate Bodies physiology, Interneurons physiology, Visual Cortex physiology, Neural Inhibition physiology, Thalamus physiology, Visual Pathways physiology, Visual Perception physiology
- Abstract
Inhibitory neurons dominate the intrinsic circuits in the visual thalamus. Interneurons in the lateral geniculate nucleus innervate relay cells and each other densely to provide powerful inhibition. The visual sector of the overlying thalamic reticular nucleus receives input from relay cells and supplies feedback inhibition to them in return. Together, these two inhibitory circuits influence all information transmitted from the retina to the primary visual cortex. By contrast, relay cells make few local connections. This review explores the role of thalamic inhibition from the dual perspectives of feature detection and information theory. For example, we describe how inhibition sharpens tuning for spatial and temporal features of the stimulus and how it might enhance image perception. We also discuss how inhibitory circuits help to reduce redundancy in signals sent downstream and, at the same time, are adapted to maximize the amount of information conveyed to the cortex.
- Published
- 2015
- Full Text
- View/download PDF
34. Structural synaptic plasticity has high memory capacity and can explain graded amnesia, catastrophic forgetting, and the spacing effect.
- Author
-
Knoblauch A, Körner E, Körner U, and Sommer FT
- Subjects
- Adult, Amnesia pathology, Brain pathology, Brain physiopathology, Cognition physiology, Humans, Nerve Net pathology, Nerve Net physiopathology, Neurons cytology, Neurons pathology, Synapses physiology, Amnesia physiopathology, Memory physiology, Models, Neurological, Neuronal Plasticity
- Abstract
Although William James and, more explicitly, Donald Hebb's theory of cell assemblies already suggested that activity-dependent rewiring of neuronal networks is the substrate of learning and memory, most theoretical work on memory over the last six decades has focused on plasticity of existing synapses in prewired networks. Research in the last decade has emphasized that structural modification of synaptic connectivity is common in the adult brain and tightly correlated with learning and memory. Here we present a parsimonious computational model for learning by structural plasticity. The basic modeling units are "potential synapses" defined as locations in the network where synapses can potentially grow to connect two neurons. This model generalizes well-known previous models for associative learning based on weight plasticity. Therefore, existing theory can be applied to analyze how many memories and how much information structural plasticity can store in a synapse. Surprisingly, we find that structural plasticity largely outperforms weight plasticity and can achieve a much higher storage capacity per synapse. The effect of structural plasticity on the structure of sparsely connected networks is quite intuitive: Structural plasticity increases the "effectual network connectivity", that is, the network wiring that specifically supports storage and recall of the memories. Further, this model of structural plasticity produces gradients of effectual connectivity in the course of learning, thereby explaining various cognitive phenomena including graded amnesia, catastrophic forgetting, and the spacing effect.
- Published
- 2014
- Full Text
- View/download PDF
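The weight-plasticity baseline that this structural-plasticity analysis generalizes is the classic Willshaw model of binary Hebbian learning. A minimal NumPy sketch (sizes illustrative) of storing sparse cell assemblies in clipped synapses and completing one from a partial cue:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, M = 1000, 20, 50  # neurons, assembly size, stored memories

assemblies = [rng.choice(N, K, replace=False) for _ in range(M)]
W = np.zeros((N, N), bool)
for a in assemblies:
    W[np.ix_(a, a)] = True  # clipped Hebbian learning: a synapse is either on or off

def recall(cue):
    h = W[:, cue].sum(1)                  # dendritic sums given the active cue neurons
    return np.flatnonzero(h >= len(cue))  # fire only on a full match with the cue

cue = assemblies[0][:10]   # half of the first assembly
completed = recall(cue)    # pattern completion from the partial cue
```

In the paper's terms, structural plasticity decides which of the N² potential locations become actual synapses, rather than assuming the fully prewired matrix used here.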
35. Spatially distributed local fields in the hippocampus encode rat position.
- Author
-
Agarwal G, Stevenson IH, Berényi A, Mizuseki K, Buzsáki G, and Sommer FT
- Subjects
- Animals, Hippocampus cytology, Maze Learning, Neurons physiology, Periodicity, Rats, Running, Spatio-Temporal Analysis, Theta Rhythm, Hippocampus physiology, Synaptic Potentials physiology
- Abstract
Although neuronal spikes can be readily detected from extracellular recordings, synaptic and subthreshold activity remains undifferentiated within the local field potential (LFP). In the hippocampus, neurons discharge selectively when the rat is at certain locations, while LFPs at single anatomical sites exhibit no such place-tuning. Nonetheless, because the representation of position is sparse and distributed, we hypothesized that spatial information can be recovered from multiple-site LFP recordings. Using high-density sampling of LFP and computational methods, we show that the spatiotemporal structure of the theta rhythm can encode position as robustly as neuronal spiking populations. Because our approach exploits the rhythmicity and sparse structure of neural activity, features found in many brain regions, it is useful as a general tool for discovering distributed LFP codes.
- Published
- 2014
- Full Text
- View/download PDF
36. Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image.
- Author
-
Martinez LM, Molano-Mazón M, Wang X, Sommer FT, and Hirsch JA
- Subjects
- Animals, Bayes Theorem, Cats, Geniculate Bodies physiology, Models, Neurological, Neurons physiology, Photic Stimulation methods, Retinal Ganglion Cells physiology, Visual Cortex physiology, Retina physiology, Thalamus physiology, Visual Fields physiology, Visual Pathways physiology
- Abstract
It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing., (Copyright © 2014 Elsevier Inc. All rights reserved.)
- Published
- 2014
- Full Text
- View/download PDF
37. Maximal mutual information, not minimal entropy, for escaping the "Dark Room".
- Author
-
Little DY and Sommer FT
- Subjects
- Humans, Attention physiology, Brain physiology, Cognition physiology, Cognitive Science trends, Perception physiology
- Abstract
A behavioral drive directed solely at minimizing prediction error would cause an agent to seek out states of unchanging, and thus easily predictable, sensory inputs (such as a dark room). The default to an evolutionarily encoded prior to avoid such untenable behaviors is unsatisfying. We suggest an alternate information theoretic interpretation to address this dilemma.
- Published
- 2013
- Full Text
- View/download PDF
38. Learning and exploration in action-perception loops.
- Author
-
Little DY and Sommer FT
- Subjects
- Animals, Humans, Artificial Intelligence, Bayes Theorem, Exploratory Behavior, Perception
- Abstract
Discovering the structure underlying observed data is a recurring problem in machine learning with important applications in neuroscience. It is also a primary function of the brain. When data can be actively collected in the context of a closed action-perception loop, behavior becomes a critical determinant of learning efficiency. Psychologists studying exploration and curiosity in humans and animals have long argued that learning itself is a primary motivator of behavior. However, the theoretical basis of learning-driven behavior is not well understood. Previous computational studies of behavior have largely focused on the control problem of maximizing acquisition of rewards and have treated learning the structure of data as a secondary objective. Here, we study exploration in the absence of external reward feedback. Instead, we take the quality of an agent's learned internal model to be the primary objective. In a simple probabilistic framework, we derive a Bayesian estimate for the amount of information about the environment an agent can expect to receive by taking an action, a measure we term the predicted information gain (PIG). We develop exploration strategies that approximately maximize PIG. One strategy based on value-iteration consistently learns faster than previously developed reward-free exploration strategies across a diverse range of environments. Psychologists believe the evolutionary advantage of learning-driven exploration lies in the generalized utility of an accurate internal model. Consistent with this hypothesis, we demonstrate that agents which learn more efficiently during exploration are later better able to accomplish a range of goal-directed tasks. We will conclude by discussing how our work elucidates the explorative behaviors of animals and humans, its relationship to other computational models of behavior, and its potential application to experimental design, such as in closed-loop neurophysiology studies.
- Published
- 2013
- Full Text
- View/download PDF
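The predicted information gain (PIG) can be illustrated for a single state-action pair under a Dirichlet-multinomial model of outcome probabilities: the expected KL divergence between the hypothetically updated posterior and the current one. A small sketch (direct numerical computation over posterior means; the paper's full Bayesian treatment is richer than this toy):

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions (both strictly positive)."""
    return float(np.sum(p * np.log(p / q)))

def pig(counts):
    """Predicted information gain for one (state, action): expected KL between the
    hypothetically updated Dirichlet posterior mean and the current one."""
    counts = np.asarray(counts, float)
    theta = counts / counts.sum()  # current posterior mean over outcomes
    gain = 0.0
    for sp in range(len(counts)):
        c2 = counts.copy()
        c2[sp] += 1.0              # pretend outcome sp was observed once
        gain += theta[sp] * kl(c2 / c2.sum(), theta)
    return gain

pig_novel = pig(np.ones(4))        # barely explored transition: large expected gain
pig_known = pig(100 * np.ones(4))  # well-learned transition: little left to gain
```

An exploring agent then prefers actions with high PIG, which is what the paper's value-iteration strategy approximately maximizes over multi-step horizons.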
39. Neurons in the thalamic reticular nucleus are selective for diverse and complex visual features.
- Author
-
Vaingankar V, Soto-Sanchez C, Wang X, Sommer FT, and Hirsch JA
- Abstract
All visual signals the cortex receives are influenced by the perigeniculate sector (PGN) of the thalamic reticular nucleus, which receives input from relay cells in the lateral geniculate and provides feedback inhibition in return. Relay cells have been studied in quantitative depth; they behave in a roughly linear fashion and have receptive fields with a stereotyped center-surround structure. We know far less about reticular neurons. Qualitative studies indicate they simply pool ascending input to generate non-selective gain control. Yet the perigeniculate is complicated; local cells are densely interconnected and fire lengthy bursts. Thus, we employed quantitative methods to explore the perigeniculate using relay cells as controls. By adapting methods of spike-triggered averaging and covariance analysis for bursts, we identified both first and second order features that build reticular receptive fields. The shapes of these spatiotemporal subunits varied widely; no stereotyped pattern emerged. Companion experiments showed that the shape of the first but not second order features could be explained by the overlap of On and Off inputs to a given cell. Moreover, we assessed the predictive power of the receptive field and how much information each component subunit conveyed. Linear-non-linear (LN) models including multiple subunits performed better than those made with just one; further each subunit encoded different visual information. Model performance for reticular cells was always lesser than for relay cells, however, indicating that reticular cells process inputs non-linearly. All told, our results suggest that the perigeniculate encodes diverse visual features to selectively modulate activity transmitted downstream.
- Published
- 2012
- Full Text
- View/download PDF
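Spike-triggered averaging, which the study adapts for bursts, can be demonstrated in its generic single-spike form: recover a simulated neuron's temporal filter by averaging the stimulus windows preceding its spikes. A NumPy sketch with an illustrative filter and nonlinearity (not the burst-adapted estimator the authors developed):

```python
import numpy as np

rng = np.random.default_rng(2)
T, L = 50000, 12
stim = rng.standard_normal(T)                            # white-noise stimulus
filt = np.sin(np.linspace(0, np.pi, L)) * np.hanning(L)  # toy temporal filter
drive = np.convolve(stim, filt[::-1], mode="valid")      # filter output per window
rate = 1 / (1 + np.exp(-(drive - 2)))                    # sigmoid spiking nonlinearity
spikes = rng.random(len(rate)) < rate                    # Bernoulli spike draw

# spike-triggered average: mean stimulus window preceding each spike
windows = np.lib.stride_tricks.sliding_window_view(stim, L)
sta = windows[spikes].mean(0)
match = np.corrcoef(sta, filt)[0, 1]  # the STA recovers the filter's shape
```

Spike-triggered covariance, also used in the paper, extends this to second-order features by diagonalizing the covariance of the same spike-triggered windows.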
40. Inhibitory circuits for visual processing in thalamus.
- Author
-
Wang X, Sommer FT, and Hirsch JA
- Subjects
- Action Potentials, Animals, Humans, Models, Neurological, Photic Stimulation, Thalamus physiology, Visual Pathways physiology, Interneurons physiology, Nerve Net physiology, Neural Inhibition physiology, Synapses physiology, Thalamus cytology
- Abstract
Synapses made by local interneurons dominate the intrinsic circuitry of the mammalian visual thalamus and influence all signals traveling from the eye to cortex. Here we draw on physiological and computational analyses of receptive fields in the cat's lateral geniculate nucleus to describe how inhibition helps to enhance selectivity for stimulus features in space and time and to improve the efficiency of the neural code. Further, we explore specialized synaptic attributes of relay cells and interneurons and discuss how these might be adapted to preserve the temporal precision of retinal spike trains and thereby maximize the rate of information transmitted downstream., (Copyright © 2011 Elsevier Ltd. All rights reserved.)
- Published
- 2011
- Full Text
- View/download PDF
41. Thalamic interneurons and relay cells use complementary synaptic mechanisms for visual processing.
- Author
-
Wang X, Vaingankar V, Soto Sanchez C, Sommer FT, and Hirsch JA
- Subjects
- Animals, Cats, Cerebral Cortex physiology, Excitatory Postsynaptic Potentials physiology, Female, Inhibitory Postsynaptic Potentials physiology, Membrane Potentials physiology, Models, Neurological, Patch-Clamp Techniques, Photic Stimulation, Neurons physiology, Synapses physiology, Thalamus physiology, Visual Pathways physiology
- Abstract
Synapses made by local interneurons dominate the thalamic circuits that process signals traveling from the eye downstream. The anatomical and physiological differences between interneurons and the (relay) cells that project to cortex are vast. To explore how these differences might influence visual processing, we made intracellular recordings from both classes of cells in vivo in cats. Macroscopically, all receptive fields were similar, consisting of two concentrically arranged subregions in which dark and bright stimuli elicited responses of the reverse sign. Microscopically, however, the responses of the two types of cells had opposite profiles. Excitatory stimuli drove trains of single excitatory postsynaptic potentials in relay cells, but graded depolarizations in interneurons. Conversely, suppressive stimuli evoked smooth hyperpolarizations in relay cells and unitary inhibitory postsynaptic potentials in interneurons. Computational analyses suggested that these complementary patterns of response help to preserve information encoded in the fine timing of retinal spikes and to increase the amount of information transmitted to cortex.
- Published
- 2011
42. Recoding of sensory information across the retinothalamic synapse.
- Author
-
Wang X, Hirsch JA, and Sommer FT
- Subjects
- Animals, Cats, Electrophysiology, Models, Neurological, Photic Stimulation, Visual Pathways physiology, Action Potentials physiology, Neurons physiology, Retina physiology, Synapses physiology, Synaptic Transmission physiology, Thalamus physiology
- Abstract
The neural code that represents the world is transformed at each stage of a sensory pathway. These transformations enable downstream neurons to recode information they receive from earlier stages. Using the retinothalamic synapse as a model system, we developed a theoretical framework to identify stimulus features that are inherited, gained, or lost across stages. Specifically, we observed that thalamic spikes encode novel, emergent, temporal features not conveyed by single retinal spikes. Furthermore, we found that thalamic spikes are not only more informative than retinal ones, as expected, but also more independent. Next, we asked how thalamic spikes gain sensitivity to the emergent features. Explicitly, we found that the emergent features are encoded by retinal spike pairs and then recoded into independent thalamic spikes. Finally, we built a model of synaptic transmission that reproduced our observations. Thus, our results established a link between synaptic mechanisms and the recoding of sensory information.
- Published
- 2010
43. Exploring the function of neural oscillations in early sensory systems.
- Author
-
Koepsell K, Wang X, Hirsch JA, and Sommer FT
- Abstract
Neuronal oscillations appear throughout the nervous system, in structures as diverse as the cerebral cortex, hippocampus, subcortical nuclei and sense organs. Whether neural rhythms contribute to normal function, are merely epiphenomena, or even interfere with physiological processing are topics of vigorous debate. Sensory pathways are ideal for investigation of oscillatory activity because their inputs can be defined. Thus, we will focus on sensory systems as we ask how neural oscillations arise and how they might encode information about the stimulus. We will highlight recent work in the early visual pathway that shows how oscillations can multiplex different types of signals to increase the amount of information that spike trains encode and transmit. Last, we will describe oscillation-based models of visual processing and explore how they might guide further research.
- Published
- 2010
44. Memory capacities for synaptic and structural plasticity.
- Author
-
Knoblauch A, Palm G, and Sommer FT
- Subjects
- Algorithms, Animals, Humans, Mathematical Concepts, Neural Networks, Computer, Synapses physiology, Brain physiology, Memory physiology, Nerve Net physiology, Neuronal Plasticity physiology, Synaptic Transmission physiology
- Abstract
Neural associative networks with plastic synapses have been proposed as computational models of brain functions and also for applications such as pattern recognition and information retrieval. To guide biological models and optimize technical applications, several definitions of memory capacity have been used to measure the efficiency of associative memory. Here we explain why the currently used performance measures bias the comparison between models and cannot serve as a theoretical benchmark. We introduce fair measures for information-theoretic capacity in associative memory that also provide a theoretical benchmark. In neural networks, two ways of manipulating synapses can be discerned: synaptic plasticity, the change in strength of existing synapses, and structural plasticity, the creation and pruning of synapses. One of the new types of memory capacity we introduce permits quantifying how structural plasticity can increase network efficiency by compressing the network structure, for example, by pruning unused synapses. Specifically, we analyze operating regimes in the Willshaw model in which structural plasticity can compress the network structure and push performance to the theoretical benchmark. The amount C of information stored in each synapse can scale with the logarithm of the network size rather than being constant, as in classical Willshaw and Hopfield nets (C ≤ ln 2 ≈ 0.7). Further, the review contains novel technical material: a capacity analysis of the Willshaw model that rigorously controls for the level of retrieval quality, an analysis for memories with a nonconstant number of active units (where C ≤ (1/e) ln 2 ≈ 0.53), and an analysis of the computational complexity of associative memories with and without network compression.
- Published
- 2010
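The Willshaw model analyzed in the abstract above can be sketched in a few lines: sparse binary pattern pairs are stored by OR-ing ("clipping") Hebbian outer products into a binary synaptic matrix, and retrieval is a single thresholding step. This is an illustrative toy, not the paper's capacity analysis; the network size, pattern sparsity, and number of stored pairs are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, M = 256, 4, 40   # neurons, active units per pattern, stored pairs

# Generate M sparse binary pattern pairs (address -> content).
addresses = np.zeros((M, n), dtype=np.uint8)
contents = np.zeros((M, n), dtype=np.uint8)
for m in range(M):
    addresses[m, rng.choice(n, k, replace=False)] = 1
    contents[m, rng.choice(n, k, replace=False)] = 1

# Willshaw learning: binary ("clipped" Hebbian) synapses, OR over all pairs.
W = np.zeros((n, n), dtype=np.uint8)
for m in range(M):
    W |= np.outer(addresses[m], contents[m])

def retrieve(x):
    """One-step threshold retrieval: a content unit fires only if every
    active address line reaches it through a potentiated synapse."""
    dendritic = x.astype(int) @ W
    return (dendritic >= x.sum()).astype(np.uint8)

# Full address cues always recover the true content units; spurious
# units appear only once the matrix fills up at larger M.
y = retrieve(addresses[0])
assert (y & contents[0]).sum() == k
```

At this loading the matrix is nearly empty, so recall is exact; the capacity analysis in the paper concerns the regime where the density of potentiated synapses, and hence the spurious-firing rate, becomes non-negligible.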
45. Learning bimodal structure in audio-visual data.
- Author
-
Monaci G, Vandergheynst P, and Sommer FT
- Subjects
- Acoustic Stimulation, Algorithms, Computer Simulation, Discrimination Learning, Humans, Photic Stimulation, Recognition, Psychology, Speech, Artificial Intelligence, Auditory Perception physiology, Learning physiology, Visual Perception physiology
- Abstract
A novel model is presented to learn bimodally informative structures from audio-visual signals. The signal is represented as a sparse sum of audio-visual kernels. Each kernel is a bimodal function consisting of synchronous snippets of an audio waveform and a spatio-temporal visual basis function. To represent an audio-visual signal, the kernels can be positioned independently and arbitrarily in space and time. The proposed algorithm uses unsupervised learning to form dictionaries of bimodal kernels from audio-visual material. The basis functions that emerge during learning capture salient audio-visual data structures. In addition, it is demonstrated that the learned dictionary can be used to locate sources of sound in the movie frame. Specifically, in sequences containing two speakers, the algorithm can robustly localize a speaker even in the presence of severe acoustic and visual distracters.
- Published
- 2009
46. Retinal oscillations carry visual information to cortex.
- Author
-
Koepsell K, Wang X, Vaingankar V, Wei Y, Wang Q, Rathbun DL, Usrey WM, Hirsch JA, and Sommer FT
- Abstract
Thalamic relay cells fire action potentials that transmit information from retina to cortex. The amount of information that spike trains encode is usually estimated from the precision of spike timing with respect to the stimulus. Sensory input, however, is only one factor that influences neural activity. For example, intrinsic dynamics, such as oscillations of networks of neurons, also modulate firing patterns. Here, we asked if retinal oscillations might help to convey information to neurons downstream. Specifically, we made whole-cell recordings from relay cells to reveal retinal inputs (EPSPs) and thalamic outputs (spikes) and then analyzed these events with information theory. Our results show that thalamic spike trains operate as two multiplexed channels. One channel, which occupies a low frequency band (<30 Hz), is encoded by average firing rate with respect to the stimulus and carries information about local changes in the visual field over time. The other operates in the gamma frequency band (40-80 Hz) and is encoded by spike timing relative to retinal oscillations. At times, the second channel conveyed even more information than the first. Because retinal oscillations involve extensive networks of ganglion cells, it is likely that the second channel transmits information about global features of the visual scene.
- Published
- 2009
47. Information transmission in oscillatory neural activity.
- Author
-
Koepsell K and Sommer FT
- Subjects
- Animals, Cats, Excitatory Postsynaptic Potentials, Geniculate Bodies physiology, Models, Neurological, Neurons physiology
- Abstract
Periodic neural activity not locked to the stimulus or to motor responses is usually ignored. Here, we present new tools for modeling and quantifying the information transmitted by periodic neural activity that occurs with quasi-random phase relative to the stimulus. We propose a model to reproduce characteristic features of oscillatory spike trains, such as histograms of inter-spike intervals and phase locking of spikes to an oscillatory influence. The proposed model is based on an inhomogeneous Gamma process governed by a density function that is a product of the usual stimulus-dependent rate and a quasi-periodic function. Further, we present an analysis method generalizing the direct method (Rieke et al. in Spikes: exploring the neural code. MIT Press, Cambridge, 1999; Brenner et al. in Neural Comput 12(7):1531-1552, 2000) to assess the information content in such data. We demonstrate these tools on recordings from relay cells in the lateral geniculate nucleus of the cat.
- Published
- 2008
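The spike-train model described in the abstract above, an inhomogeneous Gamma process whose density is the product of a stimulus-dependent rate and a periodic function, can be sampled by time rescaling: inter-spike intervals are Gamma-distributed in operational time, the integrated rate. This is a simplified sketch, not the authors' code; the 60 Hz modulation frequency, modulation depth, and Gamma shape are assumed values, and the modulation here is strictly periodic rather than quasi-periodic:

```python
import numpy as np

rng = np.random.default_rng(1)

def modulated_rate(t, r0=30.0, depth=0.6, f=60.0, phase=0.0):
    """Stimulus-driven baseline rate (Hz) times a periodic modulation
    in the gamma band. A quasi-periodic model would let f and phase
    drift slowly; here they are fixed for simplicity."""
    return r0 * (1.0 + depth * np.sin(2 * np.pi * f * t + phase))

def sample_spikes(T=2.0, gamma_shape=4.0, dt=1e-4):
    """Inhomogeneous Gamma process via time rescaling: intervals are
    Gamma(shape, 1/shape) (mean 1) in operational time
    Lambda(t) = integral of the rate up to t."""
    ts = np.arange(0.0, T, dt)
    Lam = np.cumsum(modulated_rate(ts)) * dt   # operational time
    spikes = []
    target = rng.gamma(gamma_shape, 1.0 / gamma_shape)
    for i in range(len(ts)):
        if Lam[i] >= target:
            spikes.append(ts[i])
            target += rng.gamma(gamma_shape, 1.0 / gamma_shape)
    return np.array(spikes)

spikes = sample_spikes()
# Mean firing rate stays near the 30 Hz baseline, while shape > 1
# makes the inter-spike intervals more regular than Poisson.
```

A Gamma shape of 1 reduces this to an inhomogeneous Poisson process; shapes above 1 reproduce the refractory-like regularity seen in oscillatory spike trains.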
48. Data sharing for computational neuroscience.
- Author
-
Teeters JL, Harris KD, Millman KJ, Olshausen BA, and Sommer FT
- Subjects
- Animals, Computational Biology methods, Computational Biology standards, Computer Communication Networks standards, Computer Communication Networks trends, Computer Simulation standards, Electrophysiology standards, Electrophysiology trends, Eye Movements physiology, Humans, Information Storage and Retrieval standards, Information Storage and Retrieval trends, Internet standards, Internet trends, Neurosciences methods, Neurosciences standards, Research Design standards, Research Design trends, Access to Information ethics, Computational Biology trends, Computer Simulation trends, Cooperative Behavior, Databases, Factual, Neurosciences trends
- Abstract
Computational neuroscience is a subfield of neuroscience that develops models to integrate complex experimental data in order to understand brain function. To constrain and test computational models, researchers need access to a wide variety of experimental data. Much of these data are not readily accessible because neuroscientists fall into separate communities that study the brain at different levels and have not been motivated to provide data to researchers outside their community. To foster the sharing of neuroscience data, a workshop was held in 2007, bringing together experimental and theoretical neuroscientists, computer scientists, legal experts and governmental observers. Computational neuroscience was recommended as an ideal field in which to focus data sharing, and specific methods, strategies and policies were suggested for achieving it. A new funding area in the NSF/NIH Collaborative Research in Computational Neuroscience (CRCNS) program has been established to support data sharing, guided in part by the workshop recommendations. The new funding area is dedicated to the dissemination of high-quality data sets with maximum scientific value for computational neuroscience. The first round of the CRCNS data sharing program supports the preparation of data sets that will be publicly available in 2008. These include electrophysiology and behavioral (eye movement) data described towards the end of this article.
- Published
- 2008
49. Feedforward excitation and inhibition evoke dual modes of firing in the cat's visual thalamus during naturalistic viewing.
- Author
-
Wang X, Wei Y, Vaingankar V, Wang Q, Koepsell K, Sommer FT, and Hirsch JA
- Subjects
- Action Potentials, Animals, Cats, Electrophysiology, Motion Pictures, Synapses physiology, Thalamus cytology, Visual Pathways cytology, Nature, Neural Inhibition physiology, Neurons physiology, Photic Stimulation methods, Thalamus physiology, Visual Pathways physiology
- Abstract
Thalamic relay cells transmit information from retina to cortex by firing either rapid bursts or tonic trains of spikes. Bursts occur when the membrane voltage is low, as during sleep, because they depend on channels that cannot respond to excitatory input unless they are primed by strong hyperpolarization. Cells fire tonically when depolarized, as during waking. Thus, mode of firing is usually associated with behavioral state. Growing evidence, however, suggests that sensory processing involves both burst and tonic spikes. To ask if visually evoked synaptic responses induce each type of firing, we recorded intracellular responses to natural movies from relay cells and developed methods to map the receptive fields of the excitation and inhibition that the images evoked. In addition to tonic spikes, the movies routinely elicited lasting inhibition from the center of the receptive field that permitted bursts to fire. Therefore, naturally evoked patterns of synaptic input engage dual modes of firing.
- Published
- 2007
50. A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields.
- Author
-
Rehn M and Sommer FT
- Subjects
- Animals, Macaca mulatta, Photic Stimulation, Visual Cortex physiology, Visual Pathways physiology, Models, Neurological, Nerve Net physiology, Neural Networks, Computer, Neurons physiology, Visual Cortex cytology, Visual Fields physiology
- Abstract
Computational models of primary visual cortex have demonstrated that principles of efficient coding and neuronal sparseness can explain the emergence of neurones with localised oriented receptive fields. Yet, existing models have failed to predict the diverse shapes of receptive fields that occur in nature. The existing models used a particular "soft" form of sparseness that limits average neuronal activity. Here we study models of efficient coding in a broader context by comparing soft and "hard" forms of neuronal sparseness. As a result of our analyses, we propose a novel network model for visual cortex. The model forms efficient visual representations in which the number of active neurones, rather than mean neuronal activity, is limited. This form of hard sparseness also economises cortical resources like synaptic memory and metabolic energy. Furthermore, our model accurately predicts the distribution of receptive field shapes found in the primary visual cortex of cat and monkey.
- Published
- 2007
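A hard-sparseness code of the kind described in the abstract above limits the number of active neurones per stimulus, rather than penalising mean activity. The sketch below uses a greedy, matching-pursuit-style encoder with a random dictionary; it illustrates only the hard activity constraint and is not the network model proposed in the paper, whose dictionary is learned and whose parameters differ:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_units, k = 64, 128, 3   # input dim, neurones, active units allowed

# Random overcomplete dictionary with unit-norm columns (in the paper,
# the basis functions are learned from natural images instead).
D = rng.standard_normal((n_pixels, n_units))
D /= np.linalg.norm(D, axis=0)

def encode_hard_sparse(x, k):
    """Greedy code with a hard limit of k active units: at each step,
    activate the unit best matching the residual and subtract its
    contribution (plain matching pursuit)."""
    a = np.zeros(n_units)
    r = x.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))   # best-matching unit
        c = D[:, j] @ r
        a[j] += c
        r -= c * D[:, j]
    return a

# A stimulus composed of k dictionary atoms is represented well by
# a code with at most k active units.
true_idx = rng.choice(n_units, k, replace=False)
x = D[:, true_idx] @ rng.standard_normal(k)
a = encode_hard_sparse(x, k)
x_hat = D @ a
```

The contrast with "soft" sparseness is that the constraint here is a count (an L0 budget per stimulus), not a penalty on average activation across the population.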