
Specified convergence rate guaranteed output tracking of discrete-time systems via reinforcement learning.

Authors :
Huang, Chengjie
Chen, Ci
Xie, Kan
Lewis, Frank L.
Xie, Shengli
Source :
Automatica. March 2024, Vol. 161.
Publication Year :
2024

Abstract

Toward the aim of zero tracking error and a user-specified convergence rate, a data-driven output tracking design for unknown linear discrete-time systems is investigated in this work. For policy learning, we customize a virtual auxiliary system, based on which an enhanced Bellman equation with the user-specified convergence rate is derived. This allows us to avoid explicitly measuring the time of the system evolution; only input–output data are needed in our design. By utilizing the robust output regulation technique, we learn an optimal tracker for the auxiliary system via policy iteration and value iteration. It is proved that the output tracking error of the original discrete-time system converges to zero at the specified convergence rate. The effectiveness of the proposed algorithms is illustrated by a simulation example. [ABSTRACT FROM AUTHOR]
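Editor's note: the sketch below is not the authors' data-driven algorithm (which is model-free and works from input–output data via an enhanced Bellman equation). It is only a minimal, model-based illustration of one standard way to encode a specified convergence rate gamma in a discrete-time value-iteration design: scale the system matrices by 1/gamma and solve the resulting Riccati recursion, so the closed-loop spectral radius falls below gamma. The plant matrices A, B, the weights Q, R, and gamma are assumed/illustrative values, not taken from the paper.

import numpy as np

def value_iteration_rate(A, B, Q, R, gamma, iters=500, tol=1e-10):
    # Value iteration on the scaled pair (A/gamma, B/gamma).
    # If As - Bs @ K is Schur stable, then A - B @ K has spectral
    # radius below gamma, i.e. the state decays at least as fast
    # as gamma**k (the "specified convergence rate" idea).
    As, Bs = A / gamma, B / gamma
    P = np.zeros_like(Q)
    for _ in range(iters):
        K = np.linalg.solve(R + Bs.T @ P @ Bs, Bs.T @ P @ As)
        P_next = Q + As.T @ P @ (As - Bs @ K)
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    K = np.linalg.solve(R + Bs.T @ P @ Bs, Bs.T @ P @ As)
    return P, K

if __name__ == "__main__":
    # Toy unstable second-order plant (illustrative only).
    A = np.array([[1.1, 0.3],
                  [0.0, 0.9]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])
    gamma = 0.8  # user-specified convergence rate

    P, K = value_iteration_rate(A, B, Q, R, gamma)
    rho = np.max(np.abs(np.linalg.eigvals(A - B @ K)))
    print("gain K =", K)
    print("closed-loop spectral radius =", rho, "(below gamma =", gamma, ")")

The same gain K obtained for the scaled system is applied to the original plant, since the scaling u_k = gamma**k * u_tilde_k and x_k = gamma**k * x_tilde_k cancels in the feedback law. The paper's contribution is to obtain such rate-guaranteed trackers without knowing (A, B), using only measured input–output data and robust output regulation.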

Details

Language :
English
ISSN :
00051098
Volume :
161
Database :
Academic Search Index
Journal :
Automatica
Publication Type :
Academic Journal
Accession number :
175104052
Full Text :
https://doi.org/10.1016/j.automatica.2023.111490