
Statistical discrimination in learning agents

Authors:
Duéñez-Guzmán, Edgar A.
McKee, Kevin R.
Mao, Yiran
Coppin, Ben
Chiappa, Silvia
Vezhnevets, Alexander Sasha
Bakker, Michiel A.
Bachrach, Yoram
Sadedin, Suzanne
Isaac, William
Tuyls, Karl
Leibo, Joel Z.
Publication Year:
2021

Abstract

Undesired bias afflicts both human and algorithmic decision making, and may be especially prevalent when information-processing trade-offs incentivize the use of heuristics. One primary example is statistical discrimination -- selecting social partners based not on their underlying attributes, but on readily perceptible characteristics that covary with their suitability for the task at hand. We present a theoretical model to examine how information processing influences statistical discrimination, and we test its predictions using multi-agent reinforcement learning with various agent architectures in a partner-choice-based social dilemma. As predicted, statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture. All agents showed substantial statistical discrimination, defaulting to the readily available correlates instead of the outcome-relevant features. We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias. However, all agent algorithms we tried still exhibited substantial bias after learning in biased training populations.

Comment: 29 pages, 10 figures
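The core mechanism described above -- a cheap, perceptible cue outpredicting a noisy observation of the true attribute -- can be illustrated with a small toy simulation. This is a hedged sketch, not the paper's model: the population bias parameter `bias`, the observation-noise parameter `attribute_noise`, and the `simulate` function are all illustrative assumptions, intended only to show why a payoff-maximizing learner might default to the correlated cue.

```python
import random

def simulate(bias, n=10000, attribute_noise=0.3, seed=0):
    """Toy illustration (not the paper's model).

    Each partner has a latent suitability bit. A visible tag agrees
    with that bit with probability `bias` (the population-level
    correlation). The agent sees the tag perfectly, but sees the true
    attribute only through noise: its observation is flipped with
    probability `attribute_noise`. Returns the predictive accuracy of
    each cue, so we can compare which one a learner would favor.
    """
    rng = random.Random(seed)
    tag_hits = attr_hits = 0
    for _ in range(n):
        suitable = rng.random() < 0.5
        # perceptible characteristic, correlated with suitability
        tag = suitable if rng.random() < bias else not suitable
        # noisy observation of the outcome-relevant attribute
        obs = suitable if rng.random() > attribute_noise else not suitable
        tag_hits += (tag == suitable)
        attr_hits += (obs == suitable)
    return tag_hits / n, attr_hits / n

tag_acc, attr_acc = simulate(bias=0.9)
```

With a strongly biased population (here `bias=0.9`) and a moderately noisy attribute observation, the tag is the better predictor of payoff, so an agent selecting partners to maximize reward would learn to rely on it -- i.e., statistically discriminate -- even though the tag is not the outcome-relevant feature.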

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2110.11404
Document Type:
Working Paper