
Incentivized Exploration of Non-Stationary Stochastic Bandits

Authors:
Chakraborty, Sourav
Chen, Lijun
Publication Year:
2024

Abstract

We study incentivized exploration for the multi-armed bandit (MAB) problem with non-stationary reward distributions, where players receive compensation for exploring arms other than the greedy choice and may provide biased feedback on the reward. We consider two non-stationary environments, abruptly-changing and continuously-changing, and propose a respective incentivized exploration algorithm for each. We show that the proposed algorithms achieve sublinear regret and sublinear compensation over time, thus effectively incentivizing exploration despite the non-stationarity and the biased feedback.
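To make the setup concrete, the following is a minimal sketch (not the paper's algorithm) of incentivized exploration in an abruptly-changing environment: a sliding-window UCB rule picks the arm to recommend, and whenever that arm differs from the myopic user's greedy choice, the principal pays compensation equal to the empirical mean gap. The function name, window size, and reward model below are illustrative assumptions.

```python
# Minimal sketch, assuming a sliding-window UCB recommender and a myopic user
# who follows the recommendation only if compensated for the empirical gap.
# Not the authors' algorithm; all names and parameters are hypothetical.
import numpy as np

def sw_ucb_with_compensation(reward_fn, n_arms, horizon, window=200, c=2.0):
    rng = np.random.default_rng(0)
    history = []                      # (arm, reward) pairs; only the last `window` are used
    total_compensation = 0.0
    for t in range(1, horizon + 1):
        recent = history[-window:]
        counts = np.array([sum(1 for a, _ in recent if a == k) for k in range(n_arms)])
        means = np.array([
            np.mean([r for a, r in recent if a == k]) if counts[k] > 0 else np.inf
            for k in range(n_arms)
        ])
        # UCB index over the sliding window; untried arms get priority via +inf.
        bonus = np.where(
            counts > 0,
            np.sqrt(c * np.log(min(t, window)) / np.maximum(counts, 1)),
            np.inf,
        )
        ucb_arm = int(np.argmax(means + bonus))
        # The user's greedy choice: highest windowed empirical mean among tried arms.
        greedy_arm = int(np.argmax(np.where(np.isinf(means), -np.inf, means)))
        if counts[greedy_arm] > 0 and ucb_arm != greedy_arm:
            # Pay the empirical mean gap so the myopic user accepts the recommendation.
            ucb_mean = means[ucb_arm] if counts[ucb_arm] > 0 else 0.0
            total_compensation += max(0.0, means[greedy_arm] - ucb_mean)
        reward = reward_fn(ucb_arm, t, rng)
        history.append((ucb_arm, reward))
    return total_compensation

# Example use: one abrupt change at t = 500 swaps which of two arms is best.
comp = sw_ucb_with_compensation(
    lambda arm, t, rng: rng.normal(0.5 + 0.3 * ((arm == 0) if t < 500 else (arm == 1)), 0.1),
    n_arms=2, horizon=1000)
```

The sliding window is one common way to track abrupt changes; a continuously-changing environment would typically call for a discounting scheme instead, but the compensation accounting stays the same.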

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2403.10819
Document Type:
Working Paper