
A New Neuro-Optimal Nonlinear Tracking Control Method via Integral Reinforcement Learning with Applications to Nuclear Systems.

Authors :
Zhong, Weifeng
Wang, Mengxuan
Wei, Qinglai
Lu, Jingwei
Source :
Neurocomputing. Apr 2022, Vol. 483, p361-369. 9p.
Publication Year :
2022

Abstract

In this paper, a new infinite-horizon optimal tracking control method for continuous-time nonlinear systems is presented using an actor-critic structure. The proposed integral reinforcement learning (IRL) method is a novel adaptive dynamic programming (ADP) algorithm and an online policy iteration scheme. For the optimal tracking problem, the cost function is defined in terms of the tracking error, so the goal is to drive the system toward the desired trajectory by minimizing that error. Since the Hamilton-Jacobi-Bellman (HJB) equation is difficult to solve for continuous-time nonlinear control problems, the actor-critic architecture with neural networks (NNs) is used to approximate the tracking-error performance index and the error control law. Instead of conventional neural networks, higher-order polynomials are employed throughout the actor-critic architecture. Finally, the new neuro-optimal tracking method is applied to a 2500 MW pressurized water reactor (PWR) nuclear power plant, and simulation results demonstrate the effectiveness of the developed method. [ABSTRACT FROM AUTHOR]
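To make the IRL policy-iteration idea in the abstract concrete, the following is a minimal illustrative sketch, not the paper's method: it applies the IRL Bellman equation V(x(t)) = ∫ₜ^{t+T} r dτ + V(x(t+T)) to a toy scalar linear system dx/dt = A·x + B·u with a quadratic critic V(x) = w·x² (the simplest "polynomial" basis), alternating critic evaluation by least squares with policy improvement. The system (A, B), costs (Q, R), interval T, and all function names are assumptions made for this example; the paper treats nonlinear tracking with higher-order polynomial approximators.

```python
# Illustrative IRL policy iteration on a scalar linear system (assumed
# example, not the paper's nonlinear tracking setup).
A, B = 1.0, 1.0   # toy dynamics: dx/dt = A*x + B*u
Q, R = 1.0, 1.0   # running cost: Q*x^2 + R*u^2
DT = 1e-3         # Euler integration step
T = 0.1           # IRL reinforcement interval

def rollout(x0, k, steps):
    """Simulate u = -k*x for `steps` Euler steps; return the final
    state and the accumulated integral of the running cost."""
    x, cost = x0, 0.0
    for _ in range(steps):
        u = -k * x
        cost += (Q * x * x + R * u * u) * DT
        x += (A * x + B * u) * DT
    return x, cost

def irl_policy_iteration(k0=2.0, iters=20):
    """Alternate critic evaluation (least squares over IRL Bellman
    samples, no model of A needed) and policy improvement
    u = -(1/R)*B*(dV/dx)/2, i.e. gain k = B*w/R for V = w*x^2."""
    k = k0  # initial stabilizing gain (needs k > A/B here)
    steps = int(T / DT)
    for _ in range(iters):
        phis, targets = [], []
        for x0 in (-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0):
            x1, c = rollout(x0, k, steps)
            # IRL Bellman: w*x0^2 = c + w*x1^2  =>  w*(x0^2 - x1^2) = c
            phis.append(x0 * x0 - x1 * x1)
            targets.append(c)
        # One-parameter least squares for the critic weight w.
        w = sum(p * t for p, t in zip(phis, targets)) / sum(p * p for p in phis)
        k = B * w / R  # policy improvement
    return k

k_star = irl_policy_iteration()
```

For this scalar LQR case the iteration should approach the algebraic Riccati solution (gain 1 + √2 ≈ 2.414), which gives a quick sanity check on the learned critic; the HJB equation of the nonlinear case in the paper has no such closed form, which is why the polynomial actor-critic approximation is needed.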

Details

Language :
English
ISSN :
0925-2312
Volume :
483
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
155655277
Full Text :
https://doi.org/10.1016/j.neucom.2022.01.034