
Incremental reinforcement learning and optimal output regulation under unmeasurable disturbances.

Authors :
Zhao, Jianguo
Yang, Chunyu
Gao, Weinan
Park, Ju H.
Source :
Automatica. Feb 2024, Vol. 160, Article 111468.
Publication Year :
2024

Abstract

In this paper, we propose novel data-driven optimal dynamic controller design frameworks, via both state feedback and output feedback, for solving optimal output regulation problems of linear discrete-time systems with unknown dynamics and unmeasurable disturbances using reinforcement learning (RL). In contrast to existing work on optimal output regulation and RL, the proposed procedures determine the optimal control gain and the optimal dynamic compensator simultaneously, rather than presetting a non-optimal dynamic compensator. Moreover, we present incremental dataset-based RL algorithms that learn the optimal dynamic controllers without requiring measurements of the external disturbance or the exostate during learning, which is of great practical importance. In addition, we show that the proposed incremental dataset-based learning methods are more robust than routine RL algorithms to a class of measurement noises of arbitrary magnitude. Comprehensive simulation results validate the efficacy of the proposed methods.
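A hedged illustration (not the paper's algorithm): data-driven designs of this kind typically build on model-free policy iteration, or Q-learning, for discrete-time linear-quadratic control, in which a quadratic Q-function is estimated from input-state data by least squares and the feedback gain is improved from its partition. The Python sketch below shows only that basic building block with placeholder matrices A, B, Q, R; the paper's dynamic compensator, output-feedback design, incremental dataset construction, and treatment of unmeasurable disturbances are not reproduced here.

# Minimal sketch of model-free policy iteration (Q-learning) for discrete-time LQR.
# All matrices below are illustrative placeholders, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Plant x_{k+1} = A x_k + B u_k, treated as unknown by the learner.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
n, m = B.shape
Q, R = np.eye(n), np.eye(m)          # quadratic stage cost x'Qx + u'Ru

def features(z):
    # Linear parameterization of z'Hz for symmetric H: diagonal terms z_i^2,
    # off-diagonal terms 2*z_i*z_j, stacked over the upper triangle.
    zz = np.outer(z, z) * (2.0 - np.eye(z.size))
    return zz[np.triu_indices(z.size)]

def policy_iteration(K, iters=10, samples=200, noise=0.5):
    for _ in range(iters):
        Phi, y = [], []
        for _ in range(samples):
            x = rng.standard_normal(n)                    # random restart for excitation
            u = -K @ x + noise * rng.standard_normal(m)   # exploratory input
            x_next = A @ x + B @ u                        # one-step transition (the data)
            z = np.concatenate([x, u])
            z_next = np.concatenate([x_next, -K @ x_next])
            # Bellman equation for the current policy:
            #   z'Hz - z_next'H z_next = x'Qx + u'Ru
            Phi.append(features(z) - features(z_next))
            y.append(x @ Q @ x + u @ R @ u)
        theta, *_ = np.linalg.lstsq(np.asarray(Phi), np.asarray(y), rcond=None)
        # Rebuild the symmetric Q-function matrix H from the estimated parameters.
        H = np.zeros((n + m, n + m))
        H[np.triu_indices(n + m)] = theta
        H = H + H.T - np.diag(np.diag(H))
        # Policy improvement: K <- H_uu^{-1} H_ux.
        K = np.linalg.solve(H[n:, n:], H[n:, :n])
    return K

K_learned = policy_iteration(np.zeros((m, n)))   # K = 0 stabilizes the Schur-stable A

# Model-based check (uses the true A, B): iterate the Riccati difference equation.
P = np.eye(n)
for _ in range(500):
    P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
K_opt = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("learned gain:", K_learned)
print("optimal gain:", K_opt)

The learned gain should match the Riccati-based gain to numerical precision, since the Bellman least squares uses only measured transitions (x, u, x_next); the exploration noise serves to make the regression well conditioned rather than to approximate the dynamics.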

Details

Language :
English
ISSN :
00051098
Volume :
160
Database :
Academic Search Index
Journal :
Automatica
Publication Type :
Academic Journal
Accession number :
174580288
Full Text :
https://doi.org/10.1016/j.automatica.2023.111468