
Provable Offline Preference-Based Reinforcement Learning

Authors :
Zhan, Wenhao
Uehara, Masatoshi
Kallus, Nathan
Lee, Jason D.
Sun, Wen
Publication Year :
2023

Abstract

In this paper, we investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback, where feedback is available in the form of preferences between trajectory pairs rather than explicit rewards. Our proposed algorithm consists of two main steps: (1) estimate the implicit reward using Maximum Likelihood Estimation (MLE) with general function approximation from offline data, and (2) solve a distributionally robust planning problem over a confidence set around the MLE. We consider the general reward setting, where the reward can be defined over the whole trajectory, and provide a novel guarantee that allows us to learn any target policy with a polynomial number of samples, as long as the target policy is covered by the offline data. This guarantee is the first of its kind with general function approximation. To measure the coverage of the target policy, we introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability coefficient. We also establish lower bounds that highlight the necessity of such concentrability and the difference from standard RL, where state-action-wise rewards are directly observed. We further extend and analyze our algorithm when the feedback is given over action pairs.

Comment: The first two authors contributed equally.
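The two-step recipe in the abstract (MLE reward estimation from trajectory preferences, then pessimistic planning over a confidence set) can be sketched in a toy setting. The sketch below is illustrative, not the paper's implementation: it assumes a Bradley-Terry preference model with a linear trajectory reward r_theta(tau) = <theta, phi(tau)>, and all names (phi, the candidate policies, the confidence radius) are hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
theta_star = np.array([1.0, -0.5, 0.3])  # hidden "true" reward parameters

def sample_pair():
    # Offline datum: feature vectors of two trajectories plus a preference label
    # drawn from the Bradley-Terry model P(tau1 > tau2) = sigmoid(<theta*, phi1 - phi2>).
    phi1, phi2 = rng.normal(size=d), rng.normal(size=d)
    p = 1.0 / (1.0 + np.exp(-(phi1 - phi2) @ theta_star))
    return phi1, phi2, rng.random() < p

data = [sample_pair() for _ in range(2000)]

def neg_log_lik(theta):
    # Negative log-likelihood of the observed preferences under Bradley-Terry.
    nll = 0.0
    for phi1, phi2, pref1 in data:
        z = (phi1 - phi2) @ theta
        nll += np.log1p(np.exp(-z)) if pref1 else np.log1p(np.exp(z))
    return nll

# Step 1: MLE of the implicit reward by gradient descent on the logistic loss.
theta = np.zeros(d)
lr = 0.5 / len(data)
for _ in range(200):
    grad = np.zeros(d)
    for phi1, phi2, pref1 in data:
        s = 1.0 / (1.0 + np.exp(-((phi1 - phi2) @ theta)))  # P(tau1 preferred)
        grad += (s - float(pref1)) * (phi1 - phi2)
    theta -= lr * grad

# Step 2: pessimistic planning -- score each candidate policy by its worst-case
# value over a likelihood-based confidence set around the MLE, approximated
# here crudely by rejection-sampling small perturbations of theta.
policies = {"pi_a": np.array([0.9, -0.4, 0.2]),   # hypothetical expected
            "pi_b": np.array([-0.8, 0.1, 0.6])}   # trajectory features per policy
radius, best_nll = 10.0, neg_log_lik(theta)
candidates = [theta] + [theta + 0.1 * rng.normal(size=d) for _ in range(200)]
conf_set = [t for t in candidates if neg_log_lik(t) <= best_nll + radius]

pessimistic_value = {name: min(feat @ t for t in conf_set)
                     for name, feat in policies.items()}
best_policy = max(pessimistic_value, key=pessimistic_value.get)
```

Taking the minimum over the confidence set is what makes the planner distributionally robust: a policy is only preferred if it scores well under every reward function that plausibly explains the offline preferences, which is the mechanism the coverage (concentrability) condition supports.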

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2305.14816
Document Type :
Working Paper