
Offline Regularised Reinforcement Learning for Large Language Models Alignment

Authors :
Richemond, Pierre Harvey
Tang, Yunhao
Guo, Daniel
Calandriello, Daniele
Azar, Mohammad Gheshlaghi
Rafailov, Rafael
Pires, Bernardo Avila
Tarassov, Eugene
Spangher, Lucas
Ellsworth, Will
Severyn, Aliaksei
Mallinson, Jonathan
Shani, Lior
Shamir, Gil
Joshi, Rishabh
Liu, Tianqi
Munos, Remi
Piot, Bilal
Publication Year :
2024

Abstract

The dominant framework for alignment of large language models (LLMs), whether through reinforcement learning from human feedback or direct preference optimisation, is to learn from preference data. This involves building datasets where each element is a quadruplet composed of a prompt, two independent responses (completions of the prompt), and a human preference between them, yielding a preferred and a dis-preferred response. Such data is typically scarce and expensive to collect. On the other hand, single-trajectory datasets, where each element is a triplet composed of a prompt, a response, and human feedback, are naturally more abundant. A canonical element of such a dataset is, for instance, an LLM's response to a user's prompt followed by the user's feedback, such as a thumbs-up/down. Consequently, in this work, we propose DRO, or Direct Reward Optimisation, as a framework and associated algorithms that do not require pairwise preferences. DRO uses a simple mean-squared objective that can be implemented in various ways. We validate our findings empirically, using T5 encoder-decoder language models, and show that DRO performs favourably against selected baselines such as Kahneman-Tversky Optimization (KTO). Thus, we confirm that DRO is a simple and empirically compelling method for single-trajectory policy optimisation.
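The abstract states only that DRO minimises a simple mean-squared objective on single-trajectory (prompt, response, feedback) triplets. The sketch below is a minimal illustration of one plausible form of such a loss, assuming a KL-regularised log-ratio against a frozen reference policy and a learned per-prompt baseline; the function name dro_loss, the beta coefficient, and the exact residual are illustrative assumptions, not the paper's verbatim objective.

import torch

def dro_loss(policy_logprob: torch.Tensor,
             ref_logprob: torch.Tensor,
             reward: torch.Tensor,
             value: torch.Tensor,
             beta: float = 1.0) -> torch.Tensor:
    # Squared residual between the observed scalar feedback and the sum of a
    # learned prompt-level baseline and a KL-regularised log-ratio of the
    # policy to a frozen reference policy (assumed form, for illustration).
    residual = reward - value - beta * (policy_logprob - ref_logprob)
    return 0.5 * residual.pow(2).mean()

# Toy usage on a batch of 4 logged single-trajectory examples.
policy_logprob = torch.randn(4, requires_grad=True)   # log pi_theta(y|x)
ref_logprob = torch.randn(4)                          # log pi_ref(y|x), frozen
reward = torch.tensor([1.0, 0.0, 1.0, 0.0])           # e.g. thumbs-up/down
value = torch.zeros(4, requires_grad=True)            # stands in for V_phi(x)
loss = dro_loss(policy_logprob, ref_logprob, reward, value, beta=0.1)
loss.backward()

In practice the policy and the value baseline would be trained jointly on offline single-trajectory data; the abstract does not specify those implementation details, and they are left out of this sketch.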

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.19107
Document Type :
Working Paper