Bayesian inference for generalized linear models for spiking neurons
- Source : Frontiers in Computational Neuroscience, Vol 4 (2010)
- Publication Year : 2010
- Publisher : Frontiers Media S.A.
Abstract
- Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean, as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated data as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate.
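- The following is a minimal sketch, not the authors' code, of the kind of Bayesian GLM inference the abstract describes: a Poisson spiking GLM is fit to synthetic data, the posterior is approximated by a Gaussian, and the resulting covariance yields Bayesian confidence intervals. For simplicity the sketch uses a Gaussian prior and a Laplace approximation at the posterior mode in place of the Expectation Propagation algorithm used in the paper; the stimulus matrix, spike counts, and prior precision `alpha` are all assumed/synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, D = 400, 10                        # time bins, stimulus dimensions (hypothetical sizes)
X = rng.standard_normal((T, D))       # synthetic stimulus
w_true = 0.5 * rng.standard_normal(D) # ground-truth linear filter
y = rng.poisson(np.exp(X @ w_true - 1.0))   # simulated spike counts

alpha = 1.0                           # Gaussian prior precision (regularization strength)

def neg_log_post(w):
    # Negative log-posterior of a Poisson GLM with exponential link and Gaussian prior
    eta = X @ w - 1.0
    return -(np.sum(y * eta - np.exp(eta)) - 0.5 * alpha * np.sum(w ** 2))

def grad(w):
    rate = np.exp(X @ w - 1.0)
    return -(X.T @ (y - rate) - alpha * w)

# MAP estimate (posterior mode)
w_map = minimize(neg_log_post, np.zeros(D), jac=grad, method="L-BFGS-B").x

# Gaussian approximation at the mode: covariance = inverse Hessian
rate = np.exp(X @ w_map - 1.0)
H = X.T @ (X * rate[:, None]) + alpha * np.eye(D)
cov = np.linalg.inv(H)
ci = 1.96 * np.sqrt(np.diag(cov))     # approximate 95% Bayesian confidence intervals

for j in range(D):
    print(f"w[{j}] = {w_map[j]: .2f} +/- {ci[j]:.2f}  (true {w_true[j]: .2f})")
```

- With a Laplace (L1) prior, as favoured in the abstract, the log-prior term would become a sum of absolute values, encouraging sparse filters; EP would then supply the Gaussian approximation whose mean serves as the posterior-mean point estimate.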
Details
- Language : English
- ISSN : 1662-5188
- Volume : 4
- Database : Directory of Open Access Journals
- Journal : Frontiers in Computational Neuroscience
- Publication Type : Academic Journal
- Accession number : edsdoj.19ec46e317fc4852af22551f738bf950
- Document Type : article
- Full Text : https://doi.org/10.3389/fncom.2010.00012