End-to-end reinforcement learning of Koopman models for economic nonlinear model predictive control.
- Source :
- Computers & Chemical Engineering, Nov 2024, Vol. 190
- Publication Year :
- 2024
Abstract
- (Economic) nonlinear model predictive control ((e)NMPC) requires dynamic models that are both sufficiently accurate and computationally tractable. Data-driven surrogates of mechanistic models can reduce the computational burden of (e)NMPC; however, such surrogates are typically trained by system identification for maximum prediction accuracy on simulation samples and therefore perform suboptimally as part of (e)NMPC. We present a method for end-to-end reinforcement learning of Koopman surrogate models for optimal performance as part of (e)NMPC. We apply the method to two applications derived from an established nonlinear continuous stirred-tank reactor model. Controller performance is compared to that of (e)NMPCs using models trained by system identification and to model-free neural network controllers trained by reinforcement learning. We show that the end-to-end trained models outperform those trained by system identification in (e)NMPC, and that, in contrast to the neural network controllers, the (e)NMPC controllers can react to changes in the control setting without retraining.
- • Task-optimal Koopman models for control learned using reinforcement learning.
- • Reinforcement learning of Koopman models outperforms system identification.
- • Reinforcement-learned Koopman MPCs adapt to environment changes without retraining.
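To make the modeling approach concrete, below is a minimal sketch of a Koopman surrogate model with control inputs: a learned nonlinear lifting of the state followed by linear dynamics in the lifted space. It is written in PyTorch; the class name, layer sizes, and dimensions are illustrative assumptions, not the authors' implementation, and the paper's end-to-end reinforcement learning loop and (e)NMPC formulation are not reproduced here.

```python
import torch
import torch.nn as nn

class KoopmanSurrogate(nn.Module):
    """Koopman-style surrogate (illustrative sketch): an encoder lifts the
    physical state into a latent space where the one-step dynamics are
    linear in the lifted state and the control input."""

    def __init__(self, state_dim: int, control_dim: int, latent_dim: int):
        super().__init__()
        # Nonlinear lifting (encoder) from physical state to Koopman observables.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(),
            nn.Linear(64, latent_dim),
        )
        # Linear latent dynamics: z_next = A z + B u.
        self.A = nn.Linear(latent_dim, latent_dim, bias=False)
        self.B = nn.Linear(control_dim, latent_dim, bias=False)
        # Linear map back to the physical state for predictions / MPC objectives.
        self.decoder = nn.Linear(latent_dim, state_dim, bias=False)

    def forward(self, x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)
        z_next = self.A(z) + self.B(u)
        return self.decoder(z_next)

# One-step prediction with illustrative dimensions (e.g., a small CSTR-like state).
model = KoopmanSurrogate(state_dim=4, control_dim=1, latent_dim=8)
x0 = torch.zeros(1, 4)
u0 = torch.zeros(1, 1)
x1_pred = model(x0, u0)
```

The appeal of this structure for (e)NMPC is that, once the encoder is fixed, the multi-step predictions are linear in the lifted state and the inputs, which keeps the resulting optimal control problem computationally tractable.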
Details
- Language :
- English
- ISSN :
- 0098-1354
- Volume :
- 190
- Database :
- Academic Search Index
- Journal :
- Computers & Chemical Engineering
- Publication Type :
- Academic Journal
- Accession Number :
- 179238658
- Full Text :
- https://doi.org/10.1016/j.compchemeng.2024.108824