
Beyond ℓ1 sparse coding in V1.

Authors :
Rentzeperis, Ilias
Calatroni, Luca
Perrinet, Laurent U.
Prandi, Dario
Source :
PLoS Computational Biology. 9/12/2023, Vol. 19 Issue 9, p1-21. 21p. 8 Graphs.
Publication Year :
2023

Abstract

Growing evidence indicates that only a sparse subset from a pool of sensory neurons is active for the encoding of visual stimuli at any instant in time. Traditionally, to replicate such biological sparsity, generative models have used the ℓ1 norm as a penalty due to its convexity, which makes it amenable to fast and simple algorithmic solvers. In this work, we use biological vision as a test-bed and show that, in terms of performance, the soft thresholding operation associated with the use of the ℓ1 norm is highly suboptimal compared to other functions suited to approximating ℓp with 0 ≤ p < 1 (including recently proposed continuous exact relaxations). We show that ℓ1 sparsity employs a pool with more neurons, i.e., has a higher degree of overcompleteness, in order to maintain the same reconstruction error as the other methods considered. More specifically, at the same sparsity level, the thresholding algorithm using the ℓ1 norm as a penalty requires a dictionary with ten times more units than the proposed approach, in which a non-convex continuous relaxation of the ℓ0 pseudo-norm is used, to reconstruct the external stimulus equally well. At a fixed sparsity level, both ℓ0- and ℓ1-based regularization develop units with receptive field (RF) shapes similar to biological neurons in V1 (and a subset of neurons in V2), but ℓ0-based regularization achieves approximately five times better reconstruction of the stimulus. Our results, in conjunction with recent metabolic findings, indicate that for V1 to operate efficiently it should follow a coding regime whose regularization is closer to the ℓ0 pseudo-norm than to the ℓ1 norm, and they suggest a similar mode of operation for the sensory cortex in general.

Author summary: Recordings in the brain indicate that relatively few sensory neurons are active at any instant. This so-called sparse coding is considered a hallmark of efficiency in the encoding of natural stimuli by sensory neurons. Computational work has shown that if sparse activity is added as an optimization term to a generative model encoding natural images, the model learns units with receptive fields (RFs) similar to the neurons in the primary visual cortex (V1). Traditionally, computational models have used the ℓ1 norm as the sparsity term to be minimized, because of its convexity and claims of optimality. Here we show that by using sparsity-inducing regularizers that approximate the ℓ0 pseudo-norm, we obtain sparser activations for the same quality of encoding. Moreover, at a given level of sparsity, both ℓ0- and ℓ1-based generative models produce RFs similar to V1 biological neurons, but the ℓ1 model has five times worse encoding performance. Our study thus shows that sparsity-inducing regularizers approaching the ℓ0 pseudo-norm are more appropriate for modelling biological vision from an efficiency point of view. [ABSTRACT FROM AUTHOR]
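To make the contrast concrete, the following is a minimal illustrative sketch, not the paper's method: sparse coding of a synthetic signal x ≈ D a with a fixed random dictionary, comparing the ℓ1 proximal step (soft thresholding, as in ISTA) with a hard-thresholding step that serves as a simple stand-in for ℓ0-type regularization. The dictionary, signal, thresholds, and step size below are all assumptions chosen for illustration; the continuous exact relaxation of ℓ0 studied in the paper is not reproduced here.

# Minimal sketch (illustrative only): soft vs. hard thresholding for sparse coding.
# All quantities (dictionary D, signal x, thresholds, step size) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 64, 256, 8            # signal size, dictionary size (overcomplete), true sparsity
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)  # unit-norm dictionary atoms ("units")

a_true = np.zeros(m)
support = rng.choice(m, k, replace=False)
a_true[support] = rng.standard_normal(k)
x = D @ a_true                  # synthetic stimulus generated by k active units

L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the data-fit gradient
step = 1.0 / L

def soft(a, t):
    # Proximal operator of t * ||a||_1 (soft thresholding)
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def hard(a, t):
    # Proximal operator of (t**2 / 2) * ||a||_0 (hard thresholding)
    return np.where(np.abs(a) > t, a, 0.0)

def iterative_thresholding(prox, thresh, n_iter=500):
    # Proximal gradient iterations on 0.5 * ||x - D a||^2 + penalty
    a = np.zeros(m)
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the data-fit term
        a = prox(a - step * grad, thresh)  # gradient step followed by thresholding
    return a

a_l1 = iterative_thresholding(soft, thresh=0.01 * step)  # lambda = 0.01 (assumed)
a_l0 = iterative_thresholding(hard, thresh=0.05)         # threshold assumed

for name, a in [("soft (l1)", a_l1), ("hard (l0-like)", a_l0)]:
    err = np.linalg.norm(x - D @ a) / np.linalg.norm(x)
    active = np.count_nonzero(np.abs(a) > 1e-6)
    print(f"{name:15s} active units: {active:4d}  rel. error: {err:.3f}")

In this toy setting, the thresholds and dictionary size can be varied to trade off the number of active units against reconstruction error, which is the comparison the abstract describes at a much larger scale with learned dictionaries.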

Details

Language :
English
ISSN :
1553-734X
Volume :
19
Issue :
9
Database :
Academic Search Index
Journal :
PLoS Computational Biology
Publication Type :
Academic Journal
Accession number :
171896246
Full Text :
https://doi.org/10.1371/journal.pcbi.1011459