
Self-supervised Preference Optimization: Enhance Your Language Model with Preference Degree Awareness

Authors:
Li, Jian
Huang, Haojing
Zhang, Yujia
Xu, Pengfei
Chen, Xi
Song, Rui
Shi, Lida
Wang, Jingwen
Xu, Hao
Publication Year:
2024

Abstract

Recently, there has been significant interest in replacing the reward model in Reinforcement Learning from Human Feedback (RLHF) for Large Language Models (LLMs) with methods such as Direct Preference Optimization (DPO) and its variants. These approaches commonly apply a binary cross-entropy loss to pairwise samples, i.e., minimizing and maximizing the loss based on preferred and dis-preferred responses, respectively. However, while this training strategy omits the reward model, it also overlooks the varying degrees of preference among different responses. We hypothesize that this is a key factor hindering LLMs from sufficiently understanding human preferences. To address this problem, we propose a novel Self-supervised Preference Optimization (SPO) framework, which combines a self-supervised preference-degree loss with the alignment loss, thereby helping LLMs better understand the degree of preference. Extensive experiments are conducted on two widely used datasets covering different tasks. The results demonstrate that SPO can be seamlessly integrated with existing preference optimization methods and significantly boosts their performance, achieving state-of-the-art results. We also conduct detailed analyses that offer comprehensive insights into SPO and verify its effectiveness. The code is available at https://github.com/lijian16/SPO.

Comment: Accepted at EMNLP 2024 Findings
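To make the limitation the abstract describes concrete, here is a minimal sketch of the standard DPO pairwise loss (not the authors' SPO code; all function and argument names are illustrative). It computes the binary cross-entropy on the implicit reward margin between a preferred and a dis-preferred response, which is the same regardless of how strongly one response is preferred over the other:

```python
import math

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO pairwise loss: -log sigmoid of the implicit reward margin.

    policy_logp_w / policy_logp_l: the policy's log-probabilities of the
    preferred (winning) and dis-preferred (losing) responses.
    ref_logp_w / ref_logp_l: the frozen reference model's log-probabilities.
    beta: temperature scaling the implicit reward (illustrative default).
    """
    margin = beta * ((policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l))
    # Numerically stable -log(sigmoid(margin)).
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# When the policy matches the reference, the margin is 0 and the loss is
# log 2 for every pair -- the loss sees only the binary win/lose label,
# not the degree of preference, which is the gap SPO's self-supervised
# preference-degree loss is designed to address.
```

Note that the pair (strongly preferred, barely preferred) and the pair (strongly preferred, clearly bad) yield identical losses whenever their log-probability margins coincide; SPO adds an auxiliary term on top of this alignment loss so that preference degree also carries a training signal.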

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2409.17791
Document Type:
Working Paper