
Maximum Entropy Population-Based Training for Zero-Shot Human-AI Coordination

Authors :
Zhao, Rui
Song, Jinming
Yuan, Yufeng
Hu, Haifeng
Gao, Yang
Wu, Yi
Sun, Zhongqian
Yang, Wei
Publication Year :
2021

Abstract

We study the problem of training a Reinforcement Learning (RL) agent that can collaborate with humans without using any human data. Although such agents can be obtained through self-play training, they can suffer significantly from distributional shift when paired with unencountered partners, such as humans. To mitigate this distributional shift, we propose Maximum Entropy Population-based training (MEP). In MEP, agents in the population are trained with our derived Population Entropy bonus, which promotes both pairwise diversity between agents and individual diversity of each agent; a common best agent is then trained by pairing with agents from this diversified population via prioritized sampling, where the prioritization is dynamically adjusted based on training progress. We demonstrate the effectiveness of MEP by comparing it against Self-Play PPO (SP), Population-Based Training (PBT), Trajectory Diversity (TrajeDi), and Fictitious Co-Play (FCP) in the Overcooked game environment, with partners being both human proxy models and real humans. A supplementary video showing experimental results is available at https://youtu.be/Xh-FKD0AAKE.

Comment: Accepted by NeurIPS Cooperative AI Workshop, 2021, link: https://www.cooperativeai.com/workshop/neurips-2021#Workshop-Papers. Under review at a conference.
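The core idea of the Population Entropy bonus can be illustrated with a minimal sketch. The snippet below is a hypothetical helper, not the paper's implementation: it assumes discrete actions and computes the entropy of the mean policy across a population. This quantity is high only when the population's policies are both individually stochastic and mutually diverse, which is the intuition the abstract describes.

```python
import numpy as np

def population_entropy_bonus(action_probs):
    """Entropy of the averaged policy over a population.

    action_probs: array of shape (n_agents, n_actions); each row is one
    agent's action distribution at the current state.
    """
    # Mean policy of the population at this state.
    mean_policy = action_probs.mean(axis=0)
    # Shannon entropy of the mean policy; epsilon avoids log(0).
    return float(-np.sum(mean_policy * np.log(mean_policy + 1e-12)))
```

If all agents play the same deterministic action, the bonus is near zero; if two agents deterministically pick different actions, the mean policy is uniform over those actions and the bonus rises to log 2, rewarding diversity across the population.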

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2112.11701
Document Type :
Working Paper