
A new one-point residual-feedback oracle for black-box learning and control.

Authors :
Zhang, Yan
Zhou, Yi
Ji, Kaiyi
Zavlanos, Michael M.
Source :
Automatica, Feb 2022, Vol. 136.
Publication Year :
2022

Abstract

Zeroth-order optimization (ZO) algorithms have recently been used to solve black-box or simulation-based learning and control problems, where the gradient of the objective function cannot be easily computed but can be approximated using objective function values. Many existing ZO algorithms adopt two-point feedback schemes due to their faster convergence rates compared to one-point feedback schemes. However, two-point schemes require two evaluations of the objective function at each iteration, which can be impractical in applications where the data are not all available a priori, e.g., in online optimization. In this paper, we propose a novel one-point feedback scheme that queries the function value once at each iteration and estimates the gradient using the residual between two consecutive queries. When optimizing a deterministic Lipschitz function, we show that the query complexity of ZO with the proposed one-point residual feedback matches that of ZO with the existing two-point schemes. Moreover, the query complexity of the proposed algorithm can be further improved when the objective function has a Lipschitz gradient. Then, for stochastic bandit optimization problems where only noisy objective function values are available, we show that ZO with one-point residual feedback achieves the same convergence rate as the two-point scheme with uncontrollable data samples. We demonstrate the effectiveness of the proposed one-point residual feedback via extensive numerical experiments. [ABSTRACT FROM AUTHOR]
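The residual-feedback idea from the abstract can be sketched in a few lines: each iteration makes a single function query at a randomly perturbed point and forms the gradient estimate from the difference between the current and previous query values. The sketch below is a minimal illustration, not the paper's algorithm as published; the hyperparameters (`delta`, `lr`, `steps`), the unit-sphere sampling, and the function name `zo_one_point_residual` are all illustrative assumptions.

```python
import numpy as np

def zo_one_point_residual(f, x0, steps=2000, delta=0.05, lr=2e-3, seed=0):
    """Minimize f with zeroth-order one-point residual feedback (sketch).

    Gradient estimate at iteration t (d = dimension, u_t uniform on sphere):
        g_t = (d / delta) * (f(x_t + delta*u_t) - f(x_{t-1} + delta*u_{t-1})) * u_t
    Only ONE new function query is made per iteration; the previous query
    value is reused as the baseline. Hyperparameters here are illustrative,
    not the tuned values from the paper.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    # Initial perturbation direction, sampled uniformly from the unit sphere.
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    f_prev = f(x + delta * u)  # first query
    for _ in range(steps):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        f_curr = f(x + delta * u)                  # single query this iteration
        g = (d / delta) * (f_curr - f_prev) * u    # residual-feedback estimate
        x = x - lr * g                             # plain gradient-descent step
        f_prev = f_curr                            # reuse value next iteration
    return x
```

On a smooth test function such as a quadratic, the iterates drift toward the minimizer even though each step uses only one (noisy, one-point) function evaluation, which is the practical appeal over two-point schemes in online settings.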

Details

Language :
English
ISSN :
0005-1098
Volume :
136
Database :
Academic Search Index
Journal :
Automatica
Publication Type :
Academic Journal
Accession number :
154313940
Full Text :
https://doi.org/10.1016/j.automatica.2021.110006