
Learning Zero-Shot Cooperation with Humans, Assuming Humans Are Biased

Authors:
Yu, Chao
Gao, Jiaxuan
Liu, Weilin
Xu, Botian
Tang, Hao
Yang, Jiaqi
Wang, Yu
Wu, Yi
Publication Year:
2023

Abstract

There is a recent trend of applying multi-agent reinforcement learning (MARL) to train an agent that can cooperate with humans in a zero-shot fashion, without using any human data. The typical workflow is to first run self-play (SP) repeatedly to build a policy pool and then train the final adaptive policy against this pool. A crucial limitation of this framework is that every policy in the pool is optimized w.r.t. the environment reward function, which implicitly assumes that the testing partners of the adaptive policy will precisely optimize the same reward function as well. However, human objectives are often substantially biased by their own preferences, which can differ greatly from the environment reward. We propose a more general framework, Hidden-Utility Self-Play (HSP), which explicitly models human biases as hidden reward functions in the self-play objective. By approximating the reward space with linear functions, HSP adopts an effective technique to generate an augmented policy pool containing biased policies. We evaluate HSP on the Overcooked benchmark. Empirical results show that HSP achieves higher rewards than baselines when cooperating with learned human models, manually scripted policies, and real humans. The HSP policy is also rated as the most assistive policy based on human feedback.

Comment: The first two authors contributed equally. This paper is accepted by ICLR 2023.
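To make the described workflow concrete, below is a minimal Python sketch of the pool-building step: hidden (biased) rewards are modeled as linear functions of event features, and each sampled weight vector induces one biased self-play policy added to the pool. All names here (NUM_FEATURES, sample_bias_weights, run_self_play, build_augmented_pool) are hypothetical illustrations, not the authors' released code; the self-play training itself is stubbed out.

```python
# Illustrative sketch of the Hidden-Utility Self-Play (HSP) pool-building idea
# from the abstract. Not the authors' implementation; all names are assumed.
import numpy as np

# Hypothetical number of per-step event features phi(s, a) for an
# Overcooked-like task (e.g. counts of pickups, platings, deliveries).
NUM_FEATURES = 3


def sample_bias_weights(rng, num_features=NUM_FEATURES):
    """Sample a weight vector defining one hidden (biased) reward.

    HSP approximates the space of human reward functions as linear
    combinations of event features, so each biased partner is induced
    by a different weight vector w: r_biased(s, a) = w . phi(s, a).
    """
    return rng.normal(size=num_features)


def biased_reward(phi, w):
    """Biased reward as a linear function of the event features phi."""
    return float(np.dot(w, phi))


def run_self_play(reward_fn, iterations=1000):
    """Placeholder: train a policy via self-play on reward_fn.

    A real implementation would run MARL (e.g. PPO self-play) here;
    we return an opaque policy record for illustration only.
    """
    return {"reward_fn": reward_fn, "iterations": iterations}


def build_augmented_pool(num_biased, seed=0):
    """Mix task-reward self-play policies with policies optimized
    for sampled hidden (biased) linear rewards."""
    rng = np.random.default_rng(seed)
    # Standard self-play policy w.r.t. the true environment reward.
    pool = [run_self_play(reward_fn=None)]
    # One biased policy per sampled linear hidden reward.
    for _ in range(num_biased):
        w = sample_bias_weights(rng)
        pool.append(run_self_play(reward_fn=lambda phi, w=w: biased_reward(phi, w)))
    return pool


if __name__ == "__main__":
    pool = build_augmented_pool(num_biased=4)
    print(f"Augmented pool size: {len(pool)}")
    # The final adaptive policy would then be trained against this pool
    # (training loop omitted), so it must cooperate with partners whose
    # objectives differ from the environment reward.
```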

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2302.01605
Document Type:
Working Paper