Adaptive parameterized model predictive control based on reinforcement learning: A synthesis framework.
- Authors
- Sun, Dingshan, Jamshidnejad, Anahita, and De Schutter, Bart
- Subjects
- *DEEP reinforcement learning; *REINFORCEMENT learning; *TRAFFIC engineering; *COMPUTATIONAL complexity; *PREDICTION models
- Abstract
Parameterized model predictive control (PMPC) is one of the many approaches developed to alleviate the high computational requirements of model predictive control (MPC), and it has been shown to significantly reduce computational complexity while providing control performance comparable to that of conventional MPC. However, PMPC methods still require a sufficiently accurate model to guarantee control performance. To deal with model mismatches caused by a changing environment and by disturbances, this paper first proposes a novel framework that uses reinforcement learning (RL) to adapt all components of the PMPC scheme online. More specifically, the framework integrates various strategies to adjust the different components of PMPC (e.g., the objective function, the state-feedback control function, the optimization settings, and the system model), resulting in a synthesis framework for RL-based adaptive PMPC. We show that existing adaptive (P)MPC approaches can also be embedded in this synthesis framework. The resulting combined RL-PMPC framework provides an efficient MPC approach that can deal with model mismatches. A case study is performed in which the framework is applied to freeway traffic control. Simulation results show that, for the given case study, the RL-based adaptive PMPC approach reduces computational complexity by 98% on average compared to conventional MPC, while achieving better control performance than the other controllers in the presence of model mismatches and disturbances. [ABSTRACT FROM AUTHOR]
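The core idea described in the abstract, replacing the online MPC optimization with a cheap parameterized control law whose parameters are adapted by RL to compensate for model mismatch, can be sketched in a few lines. The example below is a minimal illustration, not the authors' algorithm: a scalar system with mismatched nominal dynamics, a parameterized state-feedback law u = -theta*x standing in for the PMPC controller, and a finite-difference policy-gradient update adapting theta online. All numerical values and names (A_TRUE, theta0, etc.) are hypothetical.

```python
# Hypothetical illustration of RL-adapted parameterized control.
# True plant: x' = A_TRUE*x + B*u + noise; the nominal model used to
# design the initial controller assumes a smaller gain, so the initial
# parameter theta0 is mismatched and RL must adapt it from closed-loop cost.
import random

A_TRUE, B = 1.2, 1.0   # true (unknown) dynamics; nominal model assumed A = 1.0
NOISE = 0.01           # bounded additive disturbance

def episode_cost(theta, steps=30, x0=1.0, seed=0):
    """Closed-loop quadratic cost of the parameterized law u = -theta*x."""
    rng = random.Random(seed)
    x, cost = x0, 0.0
    for _ in range(steps):
        u = -theta * x                                   # cheap parameterized control
        x = A_TRUE * x + B * u + rng.uniform(-NOISE, NOISE)
        cost += x * x + 0.1 * u * u                      # quadratic stage cost
    return cost

def rl_adapt(theta=0.9, iters=200, lr=0.1, eps=0.05):
    """Finite-difference policy gradient: nudge theta downhill in episode cost."""
    for k in range(iters):
        seed = k  # shared disturbance realization for both perturbations
        grad = (episode_cost(theta + eps, seed=seed)
                - episode_cost(theta - eps, seed=seed)) / (2 * eps)
        theta -= lr * grad
    return theta

theta0 = 0.9                     # controller tuned on the mismatched nominal model
theta = rl_adapt(theta0)
print(f"adapted theta = {theta:.3f}")
print(f"cost before adaptation = {episode_cost(theta0):.4f}")
print(f"cost after adaptation  = {episode_cost(theta):.4f}")
```

In the paper's framework the adapted component could be any part of the PMPC scheme (objective function, feedback law, optimizer settings, or model); this sketch adapts only a single feedback-law parameter to keep the mechanism visible.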
- Published
- 2024