Data-Driven Integral Reinforcement Learning for Continuous-Time Non-Zero-Sum Games
- Source :
- IEEE Access, Vol. 7, pp. 82901-82912 (2019)
- Publication Year :
- 2019
- Publisher :
- IEEE, 2019.
Abstract
- This paper develops an integral value iteration (VI) method to efficiently find, online, the Nash equilibrium solution of two-player non-zero-sum (NZS) differential games for linear systems with partially unknown dynamics. To guarantee closed-loop stability around the Nash equilibrium, an explicit upper bound on the discount factor is given. To show the efficacy of the presented online model-free solution, the integral VI method is compared with the model-based off-line policy iteration method. Moreover, a detailed theoretical analysis of the integral VI algorithm is provided with respect to three aspects: the positive definiteness of the updated cost functions, the stability of the closed-loop system, and the conditions that guarantee monotone convergence. Finally, simulation results demonstrate the efficacy of the presented algorithms.
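For context, the sketch below illustrates the model-based off-line policy iteration baseline that the abstract mentions as the comparison method, applied to a two-player non-zero-sum linear-quadratic differential game: alternating Lyapunov-equation solves (policy evaluation) and feedback-gain updates (policy improvement) until the coupled Riccati equations are approximately satisfied. This is not the paper's data-driven integral VI algorithm, and all matrices below are illustrative assumptions rather than data from the paper.

```python
# Minimal sketch (assumed example, not the paper's method) of model-based
# policy iteration for a two-player non-zero-sum LQ differential game.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed dynamics x_dot = A x + B1 u1 + B2 u2; A is chosen stable so that
# K1 = K2 = 0 is an admissible initial policy pair.
A  = np.array([[-1.0, 0.5],
               [ 0.0, -2.0]])
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.0], [1.0]])

# Quadratic cost weights for player i: x'Qi x + ui'Rii ui + uj'Rij uj.
Q1, Q2   = np.eye(2), 2.0 * np.eye(2)
R11, R22 = np.array([[1.0]]), np.array([[2.0]])
R12, R21 = np.array([[0.5]]), np.array([[0.5]])

K1 = np.zeros((1, 2))
K2 = np.zeros((1, 2))

for it in range(100):
    # Closed-loop matrix under the current policy pair.
    Ac = A - B1 @ K1 - B2 @ K2
    # Policy evaluation: solve Ac' Pi + Pi Ac + Qi + Ki'Rii Ki + Kj'Rij Kj = 0.
    P1 = solve_continuous_lyapunov(Ac.T, -(Q1 + K1.T @ R11 @ K1 + K2.T @ R12 @ K2))
    P2 = solve_continuous_lyapunov(Ac.T, -(Q2 + K2.T @ R22 @ K2 + K1.T @ R21 @ K1))
    # Policy improvement: Ki <- Rii^{-1} Bi' Pi.
    K1_new = np.linalg.solve(R11, B1.T @ P1)
    K2_new = np.linalg.solve(R22, B2.T @ P2)
    if max(np.linalg.norm(K1_new - K1), np.linalg.norm(K2_new - K2)) < 1e-10:
        K1, K2 = K1_new, K2_new
        break
    K1, K2 = K1_new, K2_new

print("Approximate Nash feedback gains:")
print("K1 =", K1)
print("K2 =", K2)
```

The data-driven integral VI method described in the abstract replaces the explicit use of A in such iterations with integral reinforcement learning from measured state and input trajectories, which is what allows the dynamics to be only partially known.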
Details
- Language :
- English
- ISSN :
- 2169-3536
- Volume :
- 7
- Database :
- Directory of Open Access Journals
- Journal :
- IEEE Access
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.feb3c63a32d04703825529907c53784f
- Document Type :
- article
- Full Text :
- https://doi.org/10.1109/ACCESS.2019.2923845