
Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization

Authors :
Tran-The, Hung
Gupta, Sunil
Rana, Santu
Venkatesh, Svetha
Publication Year :
2022

Abstract

The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty due to its simplicity and efficiency. Despite its popularity, the theoretical aspects of this algorithm have not been properly analyzed. In particular, whether the EI strategy with a standard incumbent converges in the noisy setting remains an open question in Gaussian process bandit optimization. We aim to answer this question by proposing a variant of EI with a standard incumbent defined via the GP predictive mean. We prove that our algorithm converges and achieves a cumulative regret bound of $\mathcal O(\gamma_T\sqrt{T})$, where $\gamma_T$ is the maximum information gain between $T$ observations and the Gaussian process model. Based on this variant of EI, we further propose an algorithm called Improved GP-EI that converges faster than previous counterparts. In particular, our proposed variants of EI do not require knowledge of the RKHS norm or of the noise's sub-Gaussianity parameter, as previous works do. Empirical validation in our paper demonstrates the effectiveness of our algorithms compared to several baselines.

Comment: AISTATS 2022
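To make the abstract's central idea concrete, the sketch below illustrates (in a hedged, generic form) EI-based Bayesian optimization where the incumbent is taken from the GP predictive mean at the observed inputs rather than from the best noisy observation. This is an illustrative toy implementation only, not the paper's exact algorithm or its Improved GP-EI variant; the RBF kernel, lengthscale, noise level, and toy objective `f` are all assumptions introduced for the example.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel on 1-D inputs (assumed kernel choice).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    # Standard GP regression posterior mean and stddev at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs)) - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

def expected_improvement(mu, sigma, incumbent):
    # Closed-form EI for maximization: E[max(f - incumbent, 0)], f ~ N(mu, sigma^2).
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (mu - incumbent) / sigma
    return (mu - incumbent) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy 1-D maximization problem (hypothetical objective).
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)
X = rng.uniform(0, 2, 5)
y = f(X) + 0.01 * rng.standard_normal(5)

grid = np.linspace(0, 2, 200)
mu, sigma = gp_posterior(X, y, grid)

# Incumbent defined via the GP predictive mean at observed points
# (the idea the abstract describes), not the noisy best observation.
mu_obs, _ = gp_posterior(X, y, X)
incumbent = mu_obs.max()

x_next = grid[np.argmax(expected_improvement(mu, sigma, incumbent))]
```

Using the predictive mean as the incumbent filters out observation noise, which is precisely the setting (noisy bandit feedback) where convergence of standard EI was open.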

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2203.07875
Document Type :
Working Paper