Kernelized Offline Contextual Dueling Bandits

Authors:
Mehta, Viraj
Neopane, Ojash
Das, Vikramjeet
Lin, Sen
Schneider, Jeff
Neiswanger, Willie
Publication Year:
2023

Abstract

Preference-based feedback is important for many applications where direct evaluation of a reward function is not feasible. A notable recent example arises in reinforcement learning from human feedback on large language models. For many of these applications, the cost of acquiring human feedback can be substantial or even prohibitive. In this work, we take advantage of the fact that the agent can often choose the contexts at which to obtain human feedback in order to identify a good policy most efficiently, and we introduce the offline contextual dueling bandit setting. We give an upper-confidence-bound-style algorithm for this setting and prove a regret bound. We also give empirical confirmation that this method outperforms a similar strategy that uses uniformly sampled contexts.
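
The kernelized, UCB-style idea described above can be illustrated with a short sketch. The following Python code is a minimal illustration, assuming a Gaussian-process surrogate with an RBF kernel over (context, action pair) encodings and an uncertainty-maximizing query rule; the names KernelDuelingModel, select_query, encode, and the parameter beta are hypothetical illustrations, not the paper's actual algorithm or API.

# A minimal sketch of kernelized, UCB-style context selection for offline
# contextual dueling bandits. All names here are illustrative assumptions,
# not the paper's actual method.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """RBF kernel between row-stacked inputs A (n, d) and B (m, d)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

class KernelDuelingModel:
    """Gaussian-process surrogate over latent preference outcomes.

    Each query is a (context, action_1, action_2) triple encoded as one
    feature vector; the binary preference label is treated here, purely
    for simplicity, as a noisy regression target.
    """
    def __init__(self, noise=0.1):
        self.noise = noise
        self.X = np.empty((0, 0))
        self.y = np.empty(0)

    def fit(self, X, y):
        self.X, self.y = X, y

    def posterior(self, Xq):
        """Posterior mean and standard deviation at query points Xq."""
        if self.X.size == 0:
            return np.zeros(len(Xq)), np.ones(len(Xq))
        K = rbf_kernel(self.X, self.X) + self.noise**2 * np.eye(len(self.X))
        Ks = rbf_kernel(Xq, self.X)
        Kinv = np.linalg.inv(K)  # fine at sketch scale; use Cholesky in practice
        mean = Ks @ Kinv @ self.y
        var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)
        return mean, np.sqrt(np.maximum(var, 1e-12))

def encode(context, a1, a2):
    """Encode a (context, action pair) query as a single feature vector."""
    return np.concatenate([context, a1, a2])

def select_query(model, contexts, actions, beta=2.0):
    """UCB-style rule: choose the (context, action pair) whose preference
    outcome has the widest confidence band under the current model."""
    best, best_width = None, -np.inf
    for c in contexts:
        for i in range(len(actions)):
            for j in range(i + 1, len(actions)):
                x = encode(c, actions[i], actions[j])[None, :]
                _, std = model.posterior(x)
                if beta * std[0] > best_width:
                    best, best_width = (c, actions[i], actions[j]), beta * std[0]
    return best

# Example loop with a synthetic preference oracle (illustration only):
rng = np.random.default_rng(0)
contexts = rng.normal(size=(5, 2))
actions = rng.normal(size=(4, 2))
model = KernelDuelingModel()
X_hist, y_hist = [], []
for t in range(10):
    c, a1, a2 = select_query(model, contexts, actions)
    pref = float(rng.random() < 0.5)  # placeholder for human feedback
    X_hist.append(encode(c, a1, a2))
    y_hist.append(pref)
    model.fit(np.array(X_hist), np.array(y_hist))

The design choice this mirrors is that context selection is driven by model uncertainty rather than uniform sampling, which is precisely the comparison the abstract's experiments make.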

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2307.11288
Document Type:
Working Paper