Confidence resets reveal hierarchical adaptive learning in humans.
- Source :
- PLoS Computational Biology, Vol 15, Iss 4, p e1006972 (2019)
- Publication Year :
- 2019
- Publisher :
- Public Library of Science (PLoS), 2019.
Abstract
- Hierarchical processing is pervasive in the brain, but its computational significance for learning under uncertainty is disputed. On the one hand, hierarchical models provide an optimal framework and are becoming increasingly popular to study cognition. On the other hand, non-hierarchical (flat) models remain influential and can learn efficiently, even in uncertain and changing environments. Here, we show that previously proposed hallmarks of hierarchical learning, which relied on reports of learned quantities or choices in simple experiments, are insufficient to categorically distinguish hierarchical from flat models. Instead, we present a novel test which leverages a more complex task, whose hierarchical structure allows generalization between different statistics tracked in parallel. We use reports of confidence to quantitatively and qualitatively arbitrate between the two accounts of learning. Our results support the hierarchical learning framework, and demonstrate how confidence can be a useful metric in learning theory.
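For readers who want a concrete picture of the flat-versus-hierarchical distinction the abstract refers to, the sketch below is a minimal illustration only and not the authors' models: it assumes a flat delta-rule learner with a fixed learning rate and a hierarchical learner whose learning rate is driven by a crude higher-level estimate of volatility; all names and parameter values are hypothetical.

```python
import numpy as np

# Illustrative sketch only (not the paper's models): compare a flat learner
# with a fixed learning rate to a hierarchical learner whose learning rate
# is driven by a higher-level (volatility) estimate. Parameters are arbitrary.
rng = np.random.default_rng(0)

n_trials = 400
p_true = np.empty(n_trials)
p = 0.8
for t in range(n_trials):
    if t > 0 and t % 100 == 0:
        p = rng.uniform(0.2, 0.8)          # hidden change point
    p_true[t] = p
outcomes = (rng.random(n_trials) < p_true).astype(float)

def flat_learner(outcomes, alpha=0.1):
    """Delta rule with a constant learning rate (no higher level)."""
    est, v = np.zeros(len(outcomes)), 0.5
    for t, o in enumerate(outcomes):
        v += alpha * (o - v)
        est[t] = v
    return est

def hierarchical_learner(outcomes, meta_rate=0.05):
    """Delta rule whose learning rate tracks a running surprise (volatility) estimate."""
    est, v, volatility = np.zeros(len(outcomes)), 0.5, 0.1
    for t, o in enumerate(outcomes):
        surprise = abs(o - v)
        volatility += meta_rate * (surprise - volatility)   # higher-level update
        alpha = np.clip(volatility, 0.02, 0.5)              # adaptive learning rate
        v += alpha * (o - v)
        est[t] = v
    return est

flat = flat_learner(outcomes)
hier = hierarchical_learner(outcomes)
print("mean |error| flat:", np.mean(np.abs(flat - p_true)))
print("mean |error| hierarchical:", np.mean(np.abs(hier - p_true)))
```

The only point of the sketch is that the hierarchical learner's update size changes with its estimated volatility, the kind of adaptive behavior the abstract argues confidence reports can help discriminate from flat learning.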
- Subjects :
- Biology (General)
- QH301-705.5
Details
- Language :
- English
- ISSN :
- 1553-734X and 1553-7358
- Volume :
- 15
- Issue :
- 4
- Database :
- Directory of Open Access Journals
- Journal :
- PLoS Computational Biology
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.bb3ffc11f55c424fb33036f469ace7ed
- Document Type :
- article
- Full Text :
- https://doi.org/10.1371/journal.pcbi.1006972