
A role for cortical interneurons as adversarial discriminators.

Authors :
Benjamin, Ari S.
Kording, Konrad P.
Source :
PLoS Computational Biology. 9/28/2023, Vol. 19 Issue 9, p1-26. 26p. 4 Graphs.
Publication Year :
2023

Abstract

The brain learns representations of sensory information from experience, but the algorithms by which it does so remain unknown. One popular theory formalizes representations as inferred factors in a generative model of sensory stimuli, meaning that learning must improve this generative model and inference procedure. This framework underlies many classic computational theories of sensory learning, such as Boltzmann machines, the Wake/Sleep algorithm, and a more recent proposal that the brain learns with an adversarial algorithm that compares waking and dreaming activity. However, in order for such theories to provide insights into the cellular mechanisms of sensory learning, they must first be linked to the cell types in the brain that mediate them. In this study, we examine whether a subtype of cortical interneurons might mediate sensory learning by serving as discriminators, a crucial component in an adversarial algorithm for representation learning. We describe how such interneurons would be characterized by a plasticity rule that switches from Hebbian plasticity during waking states to anti-Hebbian plasticity in dreaming states. Evaluating the computational advantages and disadvantages of this algorithm, we find that it excels at learning representations in networks with recurrent connections but scales poorly with network size. This limitation can be partially addressed if the network also oscillates between evoked activity and generative samples on faster timescales. Consequently, we propose that an adversarial algorithm with interneurons as discriminators is a plausible and testable strategy for sensory learning in biological systems.

Author summary:

After raw sensory data is received at the periphery, it is transformed by various neural pathways and delivered to the sensory cortex. There, neural activity forms an internal model of the state of the external world, which is updated appropriately by new information.
A goal of learning, then, is to learn how to transform information into the appropriate representational form within the brain's internal model. Here, we look to artificial intelligence for new possible theories of how the brain might learn representations, theories that resolve issues with those previously proposed. We describe how one particular algorithm—adversarial learning—resolves a major issue with previous hypotheses relating to recurrence. Furthermore, this algorithm resembles broad features of the organization and learning dynamics of the brain, such as wake and sleep cycles. Considering seriously how this algorithm would appear if implemented by the brain, we map its features to known physiology and make testable predictions for how neural circuits learn new representations of information.
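The core prediction above—a discriminator interneuron whose plasticity is Hebbian on waking (evoked) activity and anti-Hebbian on dreaming (generated) activity—can be illustrated with a minimal sketch. This is not code from the paper; the linear interneuron, learning rate, and input shapes are illustrative assumptions, chosen only to show how a single sign flip turns one local, correlation-based rule into a discriminator-style update.

```python
import numpy as np

def discriminator_update(w, pre, lr, wake):
    """Illustrative sketch (not from the paper): a linear 'interneuron'
    whose response is post = w @ pre. During waking states the update is
    Hebbian (+lr * post * pre), strengthening responses to evoked activity;
    during dreaming states the sign flips to anti-Hebbian, weakening
    responses to generative samples. This mirrors a GAN discriminator's
    objective of raising its output on real data and lowering it on fakes."""
    post = w @ pre                      # interneuron's scalar response
    sign = 1.0 if wake else -1.0        # wake: Hebbian; dream: anti-Hebbian
    return w + sign * lr * post * pre   # local, correlation-based update

rng = np.random.default_rng(0)
w = 0.1 * rng.normal(size=4)            # interneuron's afferent weights
evoked = np.array([1.0, 0.5, -0.2, 0.3])  # stand-in for waking activity
dreamed = rng.normal(size=4)              # stand-in for a generative sample

w = discriminator_update(w, evoked, lr=0.01, wake=True)    # Hebbian step
w = discriminator_update(w, dreamed, lr=0.01, wake=False)  # anti-Hebbian step
```

Under this rule the interneuron's response to a pattern grows in magnitude after a Hebbian (waking) step and shrinks after an anti-Hebbian (dreaming) step, which is the qualitative signature the abstract proposes testing for.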

Details

Language :
English
ISSN :
1553-734X
Volume :
19
Issue :
9
Database :
Academic Search Index
Journal :
PLoS Computational Biology
Publication Type :
Academic Journal
Accession number :
172416616
Full Text :
https://doi.org/10.1371/journal.pcbi.1011484