1. Entropy Regularization for Mean Field Games with Learning
- Authors
Thaleia Zariphopoulou, Renyuan Xu, and Xin Guo
- Subjects
FOS: Computer and information sciences, FOS: Mathematics, Machine Learning (cs.LG), Machine Learning (stat.ML), Optimization and Control (math.OC), Mathematical optimization, Computer science, General Mathematics, Stability (learning theory), Scheduling (production processes), Management Science and Operations Research, Regularization (mathematics), Computer Science Applications, Mean field theory, Reinforcement learning, Entropy (information theory)
- Abstract
Entropy regularization has been extensively adopted to improve the efficiency, stability, and convergence of algorithms in reinforcement learning. This paper analyzes, both quantitatively and qualitatively, the impact of entropy regularization for mean field games (MFGs) with learning over a finite time horizon. Our study provides a theoretical justification that entropy regularization yields time-dependent policies and, furthermore, helps stabilize and accelerate convergence to the game equilibrium. In addition, this study leads to a policy-gradient algorithm with exploration in MFGs. With this algorithm, agents are able to learn the optimal exploration scheduling, with stable and fast convergence to the game equilibrium.
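To illustrate the core idea the abstract refers to: in entropy-regularized reinforcement learning, an entropy bonus is added to the objective, and for a discrete action space the resulting optimal policy is a softmax (Gibbs) distribution over action values, with the regularization weight controlling how much the policy explores. A minimal sketch of this generic mechanism (not the paper's continuous-time MFG setting; function names and values are illustrative only):

```python
import numpy as np

def softmax_policy(q_values, tau):
    # Optimal policy of the entropy-regularized one-step problem:
    #   max_pi  sum_a pi(a) q(a) + tau * H(pi)
    # whose solution is pi(a) proportional to exp(q(a) / tau).
    z = q_values / tau
    z = z - z.max()          # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy(p):
    # Shannon entropy of a discrete distribution.
    return -np.sum(p * np.log(p + 1e-12))

q = np.array([1.0, 0.5, 0.0])          # hypothetical action values
weak = softmax_policy(q, tau=0.1)      # small weight: near-greedy policy
strong = softmax_policy(q, tau=10.0)   # large weight: near-uniform, exploratory
```

Increasing the regularization weight `tau` raises the policy's entropy, which is the exploration effect that, per the abstract, the agents can learn to schedule over time.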
- Published
- 2022