
Mitigating the Alignment Tax of RLHF

Authors :
Lin, Yong
Lin, Hangyu
Xiong, Wei
Diao, Shizhe
Liu, Jianmeng
Zhang, Jipeng
Pan, Rui
Wang, Haoxiang
Hu, Wenbin
Zhang, Hanning
Dong, Hanze
Pi, Renjie
Zhao, Han
Jiang, Nan
Ji, Heng
Yao, Yuan
Zhang, Tong
Publication Year :
2023

Abstract

LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting of pretrained abilities, which is also known as the alignment tax. To investigate the alignment tax, we conducted experiments with existing RLHF algorithms using OpenLLaMA-3B, which revealed a pronounced alignment tax on NLP tasks. Moreover, although various techniques exist to mitigate forgetting, they are often at odds with RLHF performance, leading to an alignment-forgetting trade-off. In this paper we show that model averaging, which simply interpolates between pre- and post-RLHF model weights, surprisingly achieves the strongest alignment-forgetting Pareto front among a wide range of competing methods. To understand its effectiveness, we offer theoretical insights into model averaging, revealing that it enhances the performance Pareto front by increasing feature diversity on the layers where tasks share overlapping feature spaces. Empirical evidence corroborates our analysis by showing the benefits of averaging low-level transformer layers. Building on this analysis and the observation that averaging different transformer layers leads to significantly different alignment-forgetting trade-offs, we propose Heterogeneous Model Averaging (HMA), which heterogeneously finds combination ratios for different model layers. HMA seeks to maximize alignment performance while incurring minimal alignment tax. Moreover, we validate HMA's performance across a range of RLHF algorithms on OpenLLaMA-3B and further extend our findings to Mistral-7B, evaluated by an open-sourced preference model and GPT-4. Code is available here: https://github.com/avalonstrel/Mitigating-the-Alignment-Tax-of-RLHF.git.

Comment: EMNLP 2024 Main
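
The following is a minimal sketch of the model-averaging idea described in the abstract, assuming the pre-RLHF (SFT) and post-RLHF checkpoints are available as dictionaries of same-shaped weight tensors (e.g., PyTorch state dicts). The helper names, layer-name parsing, and mixing ratios below are hypothetical illustrations, not the authors' released implementation.

    def average_state_dicts(sft_state, rlhf_state, ratio_fn):
        # theta_avg = r * theta_RLHF + (1 - r) * theta_SFT, where r may vary
        # per parameter: a constant r recovers plain model averaging, while a
        # layer-dependent r corresponds to Heterogeneous Model Averaging (HMA).
        averaged = {}
        for name, sft_param in sft_state.items():
            r = ratio_fn(name)
            averaged[name] = r * rlhf_state[name] + (1.0 - r) * sft_param
        return averaged

    # Plain model averaging: one global mixing ratio for every parameter.
    def uniform_ratio(name):
        return 0.5

    # Illustrative heterogeneous schedule: keep a smaller RLHF ratio on
    # low-level transformer layers and a larger one on higher layers.
    def layerwise_ratio(name, split=8):
        if ".layers." in name:
            layer_idx = int(name.split(".layers.")[1].split(".")[0])
            return 0.3 if layer_idx < split else 0.7
        return 0.5

    # Usage sketch (hypothetical checkpoint paths):
    # sft_state  = torch.load("sft_model.pt")
    # rlhf_state = torch.load("rlhf_model.pt")
    # merged     = average_state_dicts(sft_state, rlhf_state, layerwise_ratio)

In this sketch, sweeping the per-layer ratios traces out different points on the alignment-forgetting trade-off; the actual HMA procedure for choosing those ratios is described in the paper and repository.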

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2309.06256
Document Type :
Working Paper