
Moderate confirmation bias enhances decision-making in groups of reinforcement-learning agents.

Authors :
Bergerot, Clémence
Barfuss, Wolfram
Romanczuk, Pawel
Source :
PLoS Computational Biology. 9/4/2024, Vol. 20 Issue 9, p1-22. 22p.
Publication Year :
2024

Abstract

Humans tend to give more weight to information confirming their beliefs than to information that disconfirms them. Nevertheless, this apparent irrationality has been shown to improve individual decision-making under uncertainty. However, little is known about this bias's impact on decision-making in a social context. Here, we investigate the conditions under which confirmation bias is beneficial or detrimental to decision-making under social influence. To do so, we develop a Collective Asymmetric Reinforcement Learning (CARL) model in which artificial agents observe others' actions and rewards, and update this information asymmetrically. We use agent-based simulations to study how confirmation bias affects collective performance on a two-armed bandit task, and how resource scarcity, group size and bias strength modulate this effect. We find that a confirmation bias benefits group learning across a wide range of resource-scarcity conditions. Moreover, we discover that, past a critical bias strength, resource abundance favors the emergence of two different performance regimes, one of which is suboptimal. In addition, we find that this regime bifurcation comes with polarization in small groups of agents. Overall, our results suggest the existence of an optimal, moderate level of confirmation bias for decision-making in a social context.

Author summary: When we give more weight to information that confirms our existing beliefs, it typically has a negative impact on learning and decision-making. However, our study shows that a moderate confirmation bias can actually improve decision-making when multiple reinforcement learning agents learn together in a social context. This finding has important implications for policymakers who engage in fighting against societal polarization and the spreading of misinformation. It can also inspire the development of artificial, distributed learning algorithms. Based on our research, we recommend not directly targeting confirmation bias but instead focusing on its underlying factors, such as group size, individual incentives, and the interactions between bias and the environment (such as filter bubbles). [ABSTRACT FROM AUTHOR]
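To illustrate the core ingredient of the model described above, the sketch below implements an asymmetric (confirmation-biased) Q-value update on a two-armed Bernoulli bandit for a single agent. This is not the authors' CARL model (which is social: agents also update from others' observed actions and rewards); it is a minimal, hypothetical single-agent version in which positive prediction errors are weighted by a larger learning rate (`alpha_conf`) than negative ones (`alpha_disconf`). All names and parameter values are illustrative assumptions, not taken from the paper.

```python
import random

def asymmetric_update(q, reward, alpha_conf, alpha_disconf):
    """One confirmation-biased Q-update: prediction errors that confirm
    the current estimate (delta >= 0) are weighted by alpha_conf,
    disconfirming errors (delta < 0) by the smaller alpha_disconf."""
    delta = reward - q
    alpha = alpha_conf if delta >= 0 else alpha_disconf
    return q + alpha * delta

def run_bandit(n_steps=2000, p_rewards=(0.3, 0.7),
               alpha_conf=0.15, alpha_disconf=0.05,
               epsilon=0.1, seed=0):
    """Single biased agent on a two-armed Bernoulli bandit with
    epsilon-greedy action selection. Returns the final Q-values and
    the fraction of choices of the objectively better arm (index 1)."""
    rng = random.Random(seed)
    q = [0.5, 0.5]          # optimistic-neutral initial estimates
    picks_best = 0
    for _ in range(n_steps):
        # epsilon-greedy choice between the two arms
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = 0 if q[0] >= q[1] else 1
        # Bernoulli reward from the chosen arm
        r = 1.0 if rng.random() < p_rewards[a] else 0.0
        q[a] = asymmetric_update(q[a], r, alpha_conf, alpha_disconf)
        picks_best += (a == 1)
    return q, picks_best / n_steps
```

With `alpha_conf > alpha_disconf`, both value estimates are inflated, but the richer arm's estimate settles higher (its fixed point is roughly `alpha_conf*p / (alpha_conf*p + alpha_disconf*(1-p))`), so the biased agent still learns to prefer it. The paper's collective setting would additionally feed each agent the observed outcomes of its groupmates through the same asymmetric rule.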

Details

Language :
English
ISSN :
1553-734X
Volume :
20
Issue :
9
Database :
Academic Search Index
Journal :
PLoS Computational Biology
Publication Type :
Academic Journal
Accession number :
179436641
Full Text :
https://doi.org/10.1371/journal.pcbi.1012404