
Iterative Feedback Tuning with automated reference model selection

Funding note :
This paper is partially supported by the FAIR (Future Artificial Intelligence Research) project, funded by the NextGenerationEU program within the PNRR-PE-AI scheme (M4C2, Investment 1.3, Line on Artificial Intelligence), by the Italian Ministry of Enterprises and Made in Italy in the framework of the project 4DDS (4D Drone Swarms) under grant no. F/310097/01-04/X56, and by the PRIN PNRR project P2022NB77E "A data-driven cooperative framework for the management of distributed energy and water resources" (CUP: D53D23016100001), funded by the NextGenerationEU program.

Authors :
Ickenroth, Tjeerd
Breschi, Valentina
Oomen, Tom
Formentin, Simone
Source :
IFAC-PapersOnLine; January 2024, Vol. 58, Issue 15, pp. 211-216 (6 pages)
Publication Year :
2024

Abstract

Iterative Feedback Tuning (IFT) is a direct, data-driven control technique that relies on a reference model to capture the desired behavior of the unknown system. The choice of this hyper-parameter is particularly critical, as a poor selection can degrade performance and even compromise closed-loop stability. This paper explores the suitability of three search methods (grid search, random search, and successive halving) for automatically tuning the reference model from data, based on a set of user-defined soft specifications on the desired closed-loop behavior. To compare the three methods and demonstrate their effectiveness, we consider a benchmark simulation case study on the control of a mass-spring-damper system. Our results indicate that successive halving is the most efficient way to run IFT with automatic reference model selection under a finite time budget for data collection.
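As a rough illustration of the successive-halving idea mentioned in the abstract, the Python sketch below prunes a pool of candidate reference models round by round, giving the surviving candidates a progressively larger experiment budget. The evaluate_ift routine, the (bandwidth, damping) parameterization of the candidates, and all numerical values are hypothetical placeholders and do not reproduce the authors' implementation or the paper's benchmark.

import math
import random


def evaluate_ift(candidate, budget):
    """Placeholder cost: stands in for running `budget` IFT experiments with the
    reference model described by `candidate` and scoring how far the achieved
    closed loop is from the user's soft specifications (lower is better)."""
    bandwidth, damping = candidate
    # Fictitious smooth cost with noise that shrinks as the budget grows.
    ideal = (2.0, 0.7)
    bias = (bandwidth - ideal[0]) ** 2 + (damping - ideal[1]) ** 2
    noise = random.gauss(0.0, 1.0 / math.sqrt(budget))
    return bias + abs(noise)


def successive_halving(candidates, total_budget, eta=2):
    """Keep the best 1/eta fraction of candidates in each round, multiplying the
    per-candidate experiment budget by eta, until one reference model remains."""
    rounds = max(1, math.ceil(math.log(len(candidates), eta)))
    budget = max(1, total_budget // (rounds * len(candidates)))
    while len(candidates) > 1:
        scores = [(evaluate_ift(c, budget), c) for c in candidates]
        scores.sort(key=lambda sc: sc[0])
        keep = max(1, len(candidates) // eta)
        candidates = [c for _, c in scores[:keep]]
        budget *= eta  # survivors get a larger data-collection budget
    return candidates[0]


if __name__ == "__main__":
    # Illustrative grid of candidate reference models (bandwidth, damping ratio).
    grid = [(bw, dz) for bw in (0.5, 1.0, 2.0, 4.0) for dz in (0.5, 0.7, 0.9, 1.1)]
    best = successive_halving(grid, total_budget=64)
    print("selected reference model (bandwidth, damping):", best)

Compared with grid or random search at a fixed budget per candidate, this scheme spends most of its experiments on the few reference models that survive the early rounds, which is the efficiency argument the abstract attributes to successive halving.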

Details

Language :
English
ISSN :
2405-8963
Volume :
58
Issue :
15
Database :
Supplemental Index
Journal :
IFAC-PapersOnLine
Publication Type :
Periodical
Accession number :
ejs67435726
Full Text :
https://doi.org/10.1016/j.ifacol.2024.08.530