
Reinforcement Learning Controller Design for Affine Nonlinear Discrete-Time Systems using Online Approximators.

Authors :
Yang, Qinmin
Jagannathan, Sarangapani
Source :
IEEE Transactions on Systems, Man & Cybernetics: Part B. Apr 2012, Vol. 42, Issue 2, p377-390. 14p.
Publication Year :
2012

Abstract

In this paper, reinforcement-learning-based adaptive critic controller designs with state feedback and output feedback are proposed using online approximators (OLAs) for a general class of multi-input multi-output affine unknown nonlinear discrete-time systems in the presence of bounded disturbances. The proposed controller design has two entities: an action network that is designed to produce an optimal control signal and a critic network that evaluates the performance of the action network. The critic estimates the cost-to-go function, which is tuned online using recursive equations derived from heuristic dynamic programming. Here, neural networks (NNs) are used for both the action and critic networks, although any OLA, such as radial basis functions, splines, or fuzzy logic, can be utilized. For the output-feedback counterpart, an additional NN is designated as an observer to estimate the unavailable system states; thus, the separation principle is not required. The NN weight tuning laws for the controller schemes are also derived while ensuring uniform ultimate boundedness of the closed-loop system using Lyapunov theory. Finally, the effectiveness of the two controllers is demonstrated in simulation on a pendulum balancing system and a two-link robotic arm system. [ABSTRACT FROM AUTHOR]
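The actor-critic structure described in the abstract can be illustrated with a minimal sketch. This is not the paper's method: it uses a hypothetical scalar plant, simple polynomial basis functions in place of NNs, and illustrative gains; the paper's MIMO design, tuning laws, and boundedness proofs are not reproduced here. It only shows the loop the abstract outlines, where a critic tracks the cost-to-go via a heuristic-dynamic-programming recursion and an actor descends the estimated cost of its own action.

```python
import math

# Hypothetical scalar affine plant x_{k+1} = f(x) + g(x)*u
# (f, g, the stage cost, and all gains below are illustrative assumptions).
def f(x): return 0.8 * math.sin(x)
def g(x): return 1.0

def features(x):
    # Polynomial basis standing in for the NN/OLA approximators.
    return [x, x * x, x ** 3]

def dot(w, phi): return sum(wi * pi for wi, pi in zip(w, phi))

def hdp_step(x, w_actor, w_critic, gamma=0.9, alpha_c=0.05, alpha_a=0.01):
    """One online HDP update: the critic reduces the temporal-difference
    error of its cost-to-go estimate; the actor reduces the estimated
    cost of the action it just produced."""
    phi = features(x)
    u = dot(w_actor, phi)                  # action network output
    x_next = f(x) + g(x) * u               # plant step
    cost = x * x + 0.1 * u * u             # stage cost r(x, u)
    phi_next = features(x_next)
    # Critic: TD error J(x_k) - (r + gamma * J(x_{k+1})), gradient descent.
    td = dot(w_critic, phi) - (cost + gamma * dot(w_critic, phi_next))
    w_critic = [w - alpha_c * td * p for w, p in zip(w_critic, phi)]
    # Actor: gradient of (r + gamma * J(x_next)) w.r.t. u,
    # chained through x_next via dx_next/du = g(x).
    dJ_du = 0.2 * u + gamma * dot(
        w_critic, [g(x), 2 * x_next * g(x), 3 * x_next ** 2 * g(x)])
    w_actor = [w - alpha_a * dJ_du * p for w, p in zip(w_actor, phi)]
    return x_next, w_actor, w_critic

# Run online from a nonzero initial state; both approximators tune as they go.
x, wa, wc = 1.0, [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
for _ in range(200):
    x, wa, wc = hdp_step(x, wa, wc)
print(abs(x))  # state regulated toward the origin
```

The key feature mirrored from the abstract is that both entities are tuned online from measured transitions only; no model of `f` would be needed if `x_next` were observed from the plant rather than simulated.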

Details

Language :
English
ISSN :
1083-4419
Volume :
42
Issue :
2
Database :
Academic Search Index
Journal :
IEEE Transactions on Systems, Man & Cybernetics: Part B
Publication Type :
Academic Journal
Accession number :
73611246
Full Text :
https://doi.org/10.1109/TSMCB.2011.2166384