
SAIL: Self-Improving Efficient Online Alignment of Large Language Models

Authors:
Ding, Mucong
Chakraborty, Souradip
Agrawal, Vibhu
Che, Zora
Koppel, Alec
Wang, Mengdi
Bedi, Amrit
Huang, Furong
Publication Year: 2024

Abstract

Reinforcement Learning from Human Feedback (RLHF) is a key method for aligning large language models (LLMs) with human preferences. However, current offline alignment approaches such as DPO, IPO, and SLiC rely heavily on fixed preference datasets, which can lead to suboptimal performance. Recent work, on the other hand, has focused on designing online RLHF methods, but these still lack a unified conceptual formulation and suffer from distribution shift. To address this, we establish that online LLM alignment is underpinned by bilevel optimization. By reducing this formulation to an efficient single-level first-order method (using the reward-policy equivalence), our approach generates new samples and iteratively refines model alignment by exploring responses and regulating preference labels. In doing so, it allows alignment methods to operate in an online, self-improving manner and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art iterative RLHF methods, our approach significantly improves alignment performance on open-source datasets with minimal computational overhead.

Comment: 24 pages, 6 figures, 3 tables
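To make the abstract's central claim more concrete, the LaTeX sketch below renders the bilevel view of online RLHF and the reward-policy equivalence in standard DPO-style notation. This is a schematic reconstruction from the abstract, not the paper's exact formulation; the symbols (policy \pi_\theta, reference model \pi_{\mathrm{ref}}, reward model r_\phi, KL weight \beta, and the preference distribution \mathcal{P}(\pi_\theta) induced by the current policy) are assumed from common RLHF/DPO usage.

% Outer level: optimize the policy against the learned reward, anchored to a reference model.
\max_{\theta}\;
  \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_{\theta}(\cdot \mid x)}
  \big[\, r_{\phi^{*}(\theta)}(x, y) \,\big]
  \;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\!\big[\, \pi_{\theta}(\cdot \mid x) \,\big\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \,\big]

% Inner level: the reward model is fit to preferences over responses drawn from the current policy,
% which is what makes the problem bilevel (and online: the data distribution moves with \pi_\theta).
\text{s.t.}\quad
  \phi^{*}(\theta) \;\in\; \arg\min_{\phi}\;
  \mathbb{E}_{(x,\, y_w \succ y_l) \,\sim\, \mathcal{P}(\pi_{\theta})}
  \big[ -\log \sigma\big( r_{\phi}(x, y_w) - r_{\phi}(x, y_l) \big) \big]

% Reward-policy equivalence (as in DPO): under the KL-regularized objective, the optimal reward is an
% implicit function of the policy itself, so substituting it eliminates the inner problem and leaves a
% single-level objective amenable to ordinary first-order optimization.
r(x, y) \;=\; \beta \log \frac{\pi_{\theta}(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} \;+\; \beta \log Z(x)

Under this substitution, each iteration of the procedure described in the abstract would amount to sampling fresh responses from the current policy, obtaining or regulating preference labels on them, and taking gradient steps on the resulting DPO-style loss.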

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2406.15567
Document Type: Working Paper