1. Policies with Sparse Inter-Agent Dependencies in Dynamic Games: A Dynamic Programming Approach
- Authors
Liu, Xinjie, Li, Jingqi, Fotiadis, Filippos, Karabag, Mustafa O., Milzman, Jesse, Fridovich-Keil, David, and Topcu, Ufuk
- Subjects
Computer Science - Computer Science and Game Theory; Computer Science - Multiagent Systems; Computer Science - Robotics; Electrical Engineering and Systems Science - Systems and Control
- Abstract
Common feedback strategies in multi-agent dynamic games require all players' state information to compute control strategies. However, in real-world scenarios, sensing and communication limitations between agents make full state feedback expensive or impractical, and such strategies can become fragile when state information from other agents is inaccurate. To this end, we propose a regularized dynamic programming approach for finding sparse feedback policies that selectively depend on the states of a subset of agents in dynamic games. The proposed approach solves convex adaptive group Lasso problems to compute sparse policies approximating Nash equilibrium solutions. We prove that the regularized solutions converge asymptotically to a neighborhood of Nash equilibrium policies in linear-quadratic (LQ) games, and we extend the approach to general non-LQ games via an iterative algorithm. Empirical results in multi-robot interaction scenarios show that the proposed approach effectively computes feedback policies with varying sparsity levels. When agents have noisy observations of other agents' states, simulation results indicate that the proposed regularized policies achieve costs up to 77% lower than standard Nash equilibrium policies, consistently across all interacting agents whose costs are coupled with other agents' states.
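To make the group Lasso idea concrete: a feedback gain matrix can be sparsified agent-by-agent by penalizing the Frobenius norm of each column block corresponding to one agent's state. The following is a minimal illustrative sketch, not the paper's adaptive group Lasso dynamic program; the gain values, block sizes, and the single proximal (block soft-thresholding) step are all hypothetical assumptions for demonstration.

```python
import numpy as np

def group_soft_threshold(K_nash, groups, lam):
    """Solve min_K 0.5*||K - K_nash||_F^2 + lam * sum_g ||K[:, g]||_F.

    The solution is block soft-thresholding: each column group (one
    agent's state block) is shrunk toward zero, and weakly coupled
    blocks are zeroed out entirely, yielding a sparse feedback gain.
    """
    K = K_nash.copy()
    for g in groups:
        norm = np.linalg.norm(K_nash[:, g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        K[:, g] = scale * K_nash[:, g]
    return K

# Hypothetical example: an ego agent's gain over three agents' 2-D states.
K_nash = np.array([[0.9, 0.1, 0.02, 0.01, 0.5, 0.3],
                   [0.2, 0.8, 0.01, 0.03, 0.4, 0.6]])
groups = [slice(0, 2), slice(2, 4), slice(4, 6)]  # one column block per agent
K_sparse = group_soft_threshold(K_nash, groups, lam=0.2)
# The middle agent's block has small norm (< lam), so it is zeroed:
# the ego policy no longer depends on that agent's state.
```

A larger `lam` zeroes more blocks, trading off fidelity to the Nash gain against policy sparsity, which mirrors the "varying sparsity levels" reported in the abstract.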
- Published
- 2024