Improved Offline Contextual Bandits with Second-Order Bounds: Betting and Freezing
- Publication Year :
- 2025
Abstract
- We consider off-policy selection and learning in contextual bandits, where the learner aims to select or train a reward-maximizing policy using data collected by a fixed behavior policy. Our contribution is two-fold. First, we propose a novel off-policy selection method that leverages a new betting-based confidence bound applied to an inverse propensity weight sequence. Our theoretical analysis shows that this method achieves a significantly better, variance-adaptive guarantee than prior art. Second, we propose a novel and generic condition on the optimization objective for off-policy learning that strikes a different balance between bias and variance. One special case, which we call freezing, tends to induce small variance and is therefore preferred in small-data regimes. Our analysis shows that these objectives match the best existing guarantees. In our empirical study, our selection method outperforms existing methods, and freezing exhibits improved performance in small-sample regimes.
- Comment: 36 pages, 8 figures
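The abstract builds on the standard inverse propensity weighting (IPW) estimator for off-policy evaluation in contextual bandits. The paper's betting-based confidence bound is not reproduced here; the sketch below only illustrates the underlying IPW value estimate on synthetic logged data, with all distributions and policies chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged bandit data (contexts omitted for brevity).
# The fixed behavior policy picks among 3 actions uniformly at random.
n_actions = 3
n = 10_000
actions = rng.integers(0, n_actions, size=n)
behavior_prob = np.full(n, 1.0 / n_actions)       # propensities mu(a|x)
rewards = rng.binomial(1, 0.2 + 0.2 * actions)    # action 2 has the best mean reward

# Hypothetical target policy: deterministically play action 2.
target_prob = (actions == 2).astype(float)        # pi(a|x) evaluated on logged actions

# IPW estimate of the target policy's value:
#   V_hat = (1/n) * sum_i [ pi(a_i|x_i) / mu(a_i|x_i) ] * r_i
weights = target_prob / behavior_prob             # inverse propensity weight sequence
v_hat = float(np.mean(weights * rewards))
print(f"IPW value estimate: {v_hat:.3f}")         # true value of the target policy is 0.6
```

The weight sequence `weights` is the kind of quantity the paper's betting-based bound is applied to in order to obtain variance-adaptive confidence intervals for policy selection.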
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2502.10826
- Document Type :
- Working Paper