
Inferring hidden structure in multilayered neural circuits

Authors :
David B. Kastner
Stephen A. Baccus
Niru Maheswaranathan
Surya Ganguli
Source :
PLoS Computational Biology, Vol 14, Iss 8, p e1006291 (2018)
Publication Year :
2017
Publisher :
Cold Spring Harbor Laboratory, 2017.

Abstract

A central challenge in sensory neuroscience involves understanding how neural circuits shape computations across cascaded cell layers. Here we develop a computational framework to reconstruct the response properties of experimentally unobserved neurons in the interior of a multilayered neural circuit. We combine non-smooth regularization with proximal consensus algorithms to overcome difficulties in fitting such models that arise from the high dimensionality of their parameter space. Our methods are statistically and computationally efficient, enabling us to rapidly learn hierarchical nonlinear models as well as efficiently compute widely used descriptive statistics such as the spike-triggered average (STA) and covariance (STC) for high-dimensional stimuli. For example, with our regularization framework, we can learn the STA and STC using 5 and 10 minutes of data, respectively, at a level of accuracy that otherwise requires 40 minutes of data without regularization. We apply our framework to retinal ganglion cell processing, learning cascaded linear-nonlinear (LN-LN) models of retinal circuitry, consisting of thousands of parameters, using 40 minutes of responses to white noise. Our models demonstrate a 53% improvement in predicting ganglion cell spikes over classical linear-nonlinear (LN) models. Internal nonlinear subunits of the model match properties of retinal bipolar cells in both receptive field structure and number. Subunits had consistently high thresholds, leading to sparse activity patterns in which only one subunit drives ganglion cell spiking at any time. From the model’s parameters, we predict that the removal of visual redundancies through stimulus decorrelation across space, a central tenet of efficient coding theory, originates primarily from bipolar cell synapses. Furthermore, the composite nonlinear computation performed by retinal circuitry corresponds to a Boolean OR function applied to bipolar cell feature detectors.
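To make the descriptive statistics concrete, here is a minimal sketch of estimating the spike-triggered average (STA) from a white-noise stimulus and a spike train. The function name, array shapes, and binning convention are illustrative assumptions, not the authors' implementation, and the regularized estimators described in the paper are not reproduced here.

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, n_lags):
    """Estimate the STA: the mean stimulus history preceding a spike.

    stimulus : (T, D) array of white-noise frames (D pixels per frame)
    spikes   : (T,) array of spike counts per time bin
    n_lags   : number of stimulus frames preceding each bin to include

    Returns an (n_lags, D) spatiotemporal filter estimate.
    Shapes and names are illustrative, not from the paper's code.
    """
    T, D = stimulus.shape
    sta = np.zeros((n_lags, D))
    total_spikes = 0
    for t in range(n_lags, T):
        if spikes[t] > 0:
            # accumulate the stimulus window weighted by the spike count
            sta += spikes[t] * stimulus[t - n_lags:t]
            total_spikes += spikes[t]
    # normalize by the total spike count (guard against zero spikes)
    return sta / max(total_spikes, 1)
```

The STC is computed analogously from the covariance of the same spike-conditioned stimulus windows; the paper's contribution is that non-smooth regularization lets both statistics converge with far less data (5–10 minutes instead of 40).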
Our general computational framework may aid in extracting principles of nonlinear hierarchical sensory processing across diverse modalities from limited data.

Author Summary

Computation in neural circuits arises from the cascaded processing of inputs through multiple cell layers. Each of these cell layers performs operations such as filtering and thresholding in order to shape a circuit’s output. It remains a challenge to describe both the computations and the mechanisms that mediate them given limited data recorded from a neural circuit. A standard approach to describing circuit computation involves building quantitative encoding models that predict the circuit response given its input, but these often fail to map in an interpretable way onto mechanisms within the circuit. In this work, we build two-layer linear-nonlinear (LN-LN) cascade models in order to describe how the retinal output is shaped by nonlinear mechanisms in the inner retina. We find that these LN-LN models, fit to ganglion cell recordings alone, identify filters and nonlinearities that are readily mapped onto individual circuit components inside the retina, namely bipolar cells and the bipolar-to-ganglion cell synaptic threshold. This work demonstrates how combining simple prior knowledge of circuit properties with partial experimental recordings of a neural circuit’s output can yield interpretable models of the entire circuit computation, including parts of the circuit that are hidden or not directly observed in neural recordings.
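The two-layer cascade structure can be sketched as a forward pass: each subunit linearly filters the stimulus and applies a high-threshold rectification, and a second stage weights the subunit outputs and applies an output nonlinearity. All names, shapes, and the choice of rectification/softplus nonlinearities below are assumptions for illustration; the paper fits such models with non-smooth regularization and proximal consensus algorithms, which this sketch does not implement.

```python
import numpy as np

def lnln_response(stimulus, subunit_filters, subunit_thresholds,
                  output_weights, output_threshold):
    """Forward pass of a hypothetical two-layer LN-LN cascade.

    stimulus           : (T, D) stimulus frames
    subunit_filters    : (D, K) linear filters, one per subunit
                         (a stand-in for K bipolar cells)
    subunit_thresholds : (K,) rectification thresholds per subunit
    output_weights     : (K,) weights onto the ganglion-cell stage
    output_threshold   : scalar offset at the output stage

    Returns a (T,) nonnegative firing-rate prediction.
    """
    # layer 1, linear step: filter the stimulus with each subunit
    drive = stimulus @ subunit_filters          # (T, K)
    # layer 1, nonlinear step: high-threshold rectification
    subunit_out = np.maximum(drive - subunit_thresholds, 0.0)
    # layer 2: weighted sum of subunit outputs, then a softplus
    # output nonlinearity as a smooth stand-in for the spiking stage
    g = subunit_out @ output_weights - output_threshold
    return np.log1p(np.exp(g))
```

With sufficiently high subunit thresholds, at most one subunit tends to be active in any time bin, so the weighted sum at the output behaves like a Boolean OR over sparse feature detectors, the composite computation the abstract attributes to the retinal circuit.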

Details

Database :
OpenAIRE
Journal :
PLoS Computational Biology, Vol 14, Iss 8, p e1006291 (2018)
Accession number :
edsair.doi.dedup.....36b6cef0a76918e15cbe3fd1be0c09d4