Asymptotic Randomised Control with applications to bandits
- Publication Year: 2020
Abstract
- We consider a general multi-armed bandit problem with correlated (and simple contextual and restless) elements, as a relaxed control problem. By introducing an entropy regularisation, we obtain a smooth asymptotic approximation to the value function. This yields a novel semi-index approximation of the optimal decision process. This semi-index can be interpreted as explicitly balancing the exploration-exploitation trade-off, as in the optimistic (UCB) principle, where the learning premium explicitly describes the asymmetry of information available in the environment and the non-linearity of the reward function. Performance of the resulting Asymptotic Randomised Control (ARC) algorithm compares favourably with other approaches to correlated multi-armed bandits.
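- The semi-index idea sketched in the abstract can be illustrated with a toy example. The sketch below assumes a correlated Gaussian bandit with a known prior covariance; the index (posterior mean plus an uncertainty premium) and the softmax choice rule are hypothetical simplifications for exposition, not the paper's actual ARC semi-index or learning premium.

```python
# Illustrative sketch only: NOT the paper's ARC formula, just a rule in its
# spirit, assuming a correlated Gaussian bandit with a known prior covariance.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: 3 correlated arms with Gaussian rewards.
true_means = np.array([0.4, 0.5, 0.45])
obs_noise = 0.5

# Prior belief: zero mean, correlated covariance (assumed for this example).
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.6],
                  [0.3, 0.6, 1.0]])

eta = 0.1      # entropy-regularisation strength (softmax temperature)
premium = 1.0  # weight on the information/learning premium

total_reward = 0.0
for t in range(500):
    # Semi-index: exploit term (posterior mean) plus a learning premium that
    # rewards arms whose value is still uncertain (posterior std. deviation).
    index = mu + premium * np.sqrt(np.diag(Sigma))

    # Entropy regularisation turns the decision into a randomised (softmax)
    # policy rather than a hard argmax over the index.
    logits = index / eta
    p = np.exp(logits - np.max(logits))
    p /= p.sum()
    a = rng.choice(3, p=p)

    # Observe a noisy reward and apply a Kalman-style Bayesian update, which
    # propagates information to correlated arms through Sigma.
    r = true_means[a] + obs_noise * rng.standard_normal()
    total_reward += r
    k = Sigma[:, a] / (Sigma[a, a] + obs_noise**2)   # Kalman gain
    mu = mu + k * (r - mu[a])
    Sigma = Sigma - np.outer(k, Sigma[a, :])

print("average reward:", total_reward / 500)
```

- The correlated covariance is what distinguishes this setting from independent-arm bandits: pulling one arm tightens the posterior on its neighbours, so the premium term shrinks across correlated arms together.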
Details
- Database: arXiv
- Publication Type: Report
- Accession number: edsarx.2010.07252
- Document Type: Working Paper