Specified convergence rate guaranteed output tracking of discrete-time systems via reinforcement learning.
- Author
- Huang, Chengjie; Chen, Ci; Xie, Kan; Lewis, Frank L.; Xie, Shengli
- Subjects
- *DISCRETE-time systems; *REINFORCEMENT learning; *ITERATIVE learning control; *LINEAR systems
- Abstract
- Toward the aim of zero tracking error and a user-specified convergence rate, a data-driven output tracking design for unknown linear discrete-time systems is investigated in this work. For policy learning, we customize a virtual auxiliary system, based on which an enhanced Bellman equation with the user-specified convergence rate is derived. This allows us to avoid explicitly measuring the time of the system evolution; only input–output data are required in our design. By utilizing the robust output regulation technique, we learn an optimal tracker for the auxiliary system via policy iteration and value iteration. It is proved that the output tracking error of the original discrete-time system converges to zero at the specified convergence rate. The effectiveness of the proposed algorithms is illustrated by a simulation example. [ABSTRACT FROM AUTHOR] (A minimal illustrative sketch of the policy-iteration idea appears after this record.)
- Published
- 2024
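The abstract describes learning an optimal tracker with a user-specified convergence rate via policy iteration and value iteration. The sketch below is not the paper's data-driven, input–output algorithm (which builds on an auxiliary system and an enhanced Bellman equation); it is a minimal, model-based illustration of two ingredients the abstract names: policy iteration for a discrete-time linear-quadratic problem, and a standard rate-scaling trick that forces the closed-loop spectral radius below a chosen rate γ. The matrices, the initial gain `K0`, and the helper `rate_scaled_gain` are hypothetical examples, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov


def policy_iteration_lqr(A, B, Q, R, K0, iters=50, tol=1e-9):
    """Policy iteration for x_{k+1} = A x_k + B u_k with cost sum x'Qx + u'Ru.
    K0 must be a stabilizing gain for the policy u_k = -K x_k."""
    K = np.array(K0, dtype=float)
    P = None
    for _ in range(iters):
        Ac = A - B @ K
        # Policy evaluation: solve the Lyapunov-type Bellman equation
        #   P = Ac' P Ac + Q + K' R K
        P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)
        # Policy improvement: greedy gain from the associated Q-function
        K_next = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        if np.linalg.norm(K_next - K) < tol:
            K = K_next
            break
        K = K_next
    return K, P


def rate_scaled_gain(A, B, Q, R, K0, gamma):
    """Hypothetical helper: run policy iteration on the scaled pair (A/gamma, B/gamma)
    so the resulting closed loop A - B K has spectral radius below gamma, i.e.
    trajectories decay at least as fast as gamma**k (a classical 'prescribed
    degree of stability' device, used here only to illustrate the rate idea)."""
    return policy_iteration_lqr(A / gamma, B / gamma, Q, R, K0)


if __name__ == "__main__":
    # Made-up second-order example (not from the paper).
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.0],
                  [0.1]])
    Q = np.eye(2)
    R = np.array([[1.0]])
    K0 = np.array([[1.0, 1.5]])   # stabilizing initial gain for this example
    K_rate, _ = rate_scaled_gain(A, B, Q, R, K0, gamma=0.95)
    print("closed-loop spectral radius:",
          max(abs(np.linalg.eigvals(A - B @ K_rate))))
```

The printed spectral radius stays below the chosen γ = 0.95, which mirrors the abstract's notion of a guaranteed convergence rate; the paper itself achieves this from input–output data alone, without the model knowledge assumed in this sketch.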