
Linear Bandit Algorithms with Sublinear Time Complexity

Authors:
Yang, Shuo
Ren, Tongzheng
Shakkottai, Sanjay
Price, Eric
Dhillon, Inderjit S.
Sanghavi, Sujay
Publication Year: 2021

Abstract

We propose two linear bandit algorithms with per-step complexity sublinear in the number of arms $K$. The algorithms are designed for applications where the arm set is extremely large and slowly changing. Our key observation is that choosing an arm reduces to a maximum inner product search (MIPS) problem, which can be solved approximately without breaking the regret guarantees. Existing approximate MIPS solvers run in sublinear time. We extend those solvers and present theoretical guarantees for online learning problems, where adaptivity (i.e., each step depends on the feedback from previous steps) becomes a unique challenge. We then explicitly characterize the tradeoff between per-step complexity and regret. For sufficiently large $K$, our algorithms have sublinear per-step complexity and $\tilde O(\sqrt{T})$ regret. Empirically, we evaluate the proposed algorithms in a synthetic environment and on a real-world online movie recommendation problem. They deliver a speedup of more than 72x over the linear-time baselines while retaining similar regret.

Comment: Accepted at ICML 2022
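For intuition, below is a minimal sketch of the reduction the abstract describes, not the authors' algorithms: using linear Thompson sampling for concreteness, each step samples a parameter vector from the posterior, and the arm choice is exactly a maximum inner product search against that sample. The approx_mips stub is an assumption standing in for any sublinear-time c-approximate MIPS solver (e.g., LSH- or tree-based); here it performs an exact linear scan so the snippet is runnable.

    import numpy as np

    rng = np.random.default_rng(0)
    K, d, T = 100_000, 16, 50          # many arms, low dimension
    arms = rng.normal(size=(K, d))
    arms /= np.linalg.norm(arms, axis=1, keepdims=True)
    theta_star = rng.normal(size=d)    # unknown true parameter

    def approx_mips(A, q):
        # Stand-in for a sublinear c-approximate MIPS solver: a real
        # solver returns any arm with inner product >= c * max_a <a, q>.
        # An exact O(K) scan keeps this sketch self-contained.
        return int(np.argmax(A @ q))

    # Linear Thompson sampling: each arm selection is one MIPS call.
    lam = 1.0
    V = lam * np.eye(d)                # regularized Gram matrix
    b = np.zeros(d)
    for t in range(T):
        V_inv = np.linalg.inv(V)
        theta_hat = V_inv @ b                                    # ridge estimate
        theta_tilde = rng.multivariate_normal(theta_hat, V_inv)  # posterior sample
        i = approx_mips(arms, theta_tilde)                       # MIPS = arm choice
        x = arms[i]
        r = x @ theta_star + 0.1 * rng.normal()                  # noisy linear reward
        V += np.outer(x, x)                                      # rank-1 update
        b += r * x

Note that everything outside approx_mips costs time polynomial in $d$ but independent of $K$, which is why replacing the exact scan with a sublinear approximate solver makes the whole per-step cost sublinear in the number of arms; the adaptivity challenge the abstract mentions arises because the query vector at each step depends on earlier feedback.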

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2103.02729
Document Type: Working Paper