1. Inverse Reinforcement Q-Learning Through Expert Imitation for Discrete-Time Systems
- Author
- Patrik Kolaric, Bosen Lian, Jialu Fan, Tianyou Chai, Frank L. Lewis, and Wenqian Xue
- Subjects
- Computer Networks and Communications, Computer science, Stability (learning theory), Q-learning, Inverse, Function (mathematics), Optimal control, Computer Science Applications, Discrete time and continuous time, Artificial Intelligence, Convergence (routing), Imitation, Software
- Abstract
In inverse reinforcement learning (RL), there are two agents. An expert target agent has a performance cost function and demonstrates its control and state behaviors to a learner. The learner agent does not know the expert's performance cost function but seeks to reconstruct it by observing the expert's behaviors, and tries to imitate these behaviors optimally with its own response. In this article, we formulate an imitation problem in which the optimal performance intent of a discrete-time (DT) expert target agent is unknown to a DT learner agent. Using only the observed expert behavior trajectory, the learner seeks to determine a cost function that yields the same optimal feedback gain as the expert's, and thus imitates the expert's optimal response. We develop an inverse RL approach with a new scheme to solve this behavior imitation problem. The scheme consists of a cost function update based on an extension of RL policy iteration and inverse optimal control, and a control policy update based on optimal control. Under this scheme, we then develop an inverse reinforcement Q-learning algorithm, an extension of RL Q-learning that does not require any knowledge of the agent dynamics. Proofs of stability, convergence, and optimality are given. A key property regarding the nonuniqueness of the solution is also shown. Finally, simulation experiments are presented to demonstrate the effectiveness of the new approach.
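The abstract describes the scheme only in words. Below is a minimal, model-based sketch of the imitation idea for a linear-quadratic example, written with NumPy. It is an illustration under assumptions made here, not the authors' model-free inverse reinforcement Q-learning algorithm: the double-integrator system, the input weight R assumed known to the learner, and all names (`dare`, `K_hat`, `Q_l`, etc.) are placeholders introduced for this sketch, and the cost is recovered by a direct inverse-optimal-control computation that uses the system matrices, which the paper's Q-learning algorithm deliberately avoids.

```python
# Minimal model-based sketch of expert imitation via inverse optimal control.
# Assumed example only; not the paper's model-free inverse Q-learning algorithm.
import numpy as np

# --- Discrete-time system (assumed example): double integrator ---
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
n, m = B.shape

# --- Expert: LQR agent whose state weight Q_e is unknown to the learner ---
Q_e = np.diag([1.0, 1.0])
R = np.array([[1.0]])          # input weight, assumed known to the learner

def dare(A, B, Q, R, iters=500):
    """Solve the discrete-time Riccati equation by simple value iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return P, K

P_e, K_e = dare(A, B, Q_e, R)

# --- Expert demonstrations: closed-loop state/control trajectory ---
X, U = [], []
x = np.array([1.0, -0.5])
for _ in range(50):
    u = -K_e @ x
    X.append(x); U.append(u)
    x = A @ x + B @ u
X, U = np.array(X), np.array(U)

# --- Learner step 1: estimate the expert feedback gain from observed behavior ---
K_hat = -np.linalg.lstsq(X, U, rcond=None)[0].T        # fits u_k ~= -K_hat x_k

# --- Learner step 2 (inverse optimal control): find a value matrix P_l and a
# state weight Q_l that make K_hat optimal.  The stationarity condition
#   (R + B' P B) K_hat = B' P A   <=>   B' P (A - B K_hat) = R K_hat
# is linear in the symmetric P, so solve it by least squares. ---
Ac = A - B @ K_hat
idx = [(i, j) for i in range(n) for j in range(i, n)]  # free entries of symmetric P
M = np.zeros((m * n, len(idx)))
for c, (i, j) in enumerate(idx):
    E = np.zeros((n, n)); E[i, j] = 1.0; E[j, i] = 1.0
    M[:, c] = (B.T @ E @ Ac).ravel()
p = np.linalg.lstsq(M, (R @ K_hat).ravel(), rcond=None)[0]
P_l = np.zeros((n, n))
for c, (i, j) in enumerate(idx):
    P_l[i, j] = P_l[j, i] = p[c]

# Closed-loop Bellman equation  P = Q + K'RK + Ac'P Ac  yields the state weight.
# Positive semidefiniteness is not guaranteed by this least-norm choice of P_l
# and should be checked before treating Q_l as a valid cost.
Q_l = P_l - K_hat.T @ R @ K_hat - Ac.T @ P_l @ Ac

# --- Check: the learner's optimal gain under (Q_l, R) reproduces the expert's ---
K_l = np.linalg.solve(R + B.T @ P_l @ B, B.T @ P_l @ A)
print("expert gain   :", K_e)
print("imitated gain :", K_l)
print("recovered Q_l eigenvalues:", np.linalg.eigvalsh(Q_l))
```

Because the stationarity condition leaves part of P_l unconstrained (here its (1,1) entry never enters the equations), many cost functions reproduce the same expert gain; this is consistent with the nonuniqueness property highlighted in the abstract.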
- Published
- 2023