
KTO: Model Alignment as Prospect Theoretic Optimization

Authors:
Ethayarajh, Kawin
Xu, Winnie
Muennighoff, Niklas
Jurafsky, Dan
Kiela, Douwe

Publication Year: 2024

Abstract

Kahneman & Tversky's $\textit{prospect theory}$ tells us that humans perceive random variables in a biased but well-defined manner (1992); for example, humans are famously loss-averse. We show that objectives for aligning LLMs with human feedback implicitly incorporate many of these biases -- the success of these objectives (e.g., DPO) over cross-entropy minimization can partly be ascribed to them belonging to a family of loss functions that we call $\textit{human-aware losses}$ (HALOs). However, the utility functions these methods attribute to humans still differ from those in the prospect theory literature. Using a Kahneman-Tversky model of human utility, we propose a HALO that directly maximizes the utility of generations instead of maximizing the log-likelihood of preferences, as current methods do. We call this approach KTO, and it matches or exceeds the performance of preference-based methods at scales from 1B to 30B, despite only learning from a binary signal of whether an output is desirable. More broadly, our work suggests that there is no one HALO that is universally superior; the best loss depends on the inductive biases most appropriate for a given setting, an oft-overlooked consideration.

Comment: ICML 2024
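To make the abstract's description concrete, the following is a minimal sketch of a KTO-style objective as described above: a Kahneman-Tversky value function (sigmoid, so concave in gains and convex in losses) applied to the policy-vs-reference log-ratio, measured against a KL reference point, with separate weights for desirable and undesirable outputs. This is an illustration under those assumptions, not the paper's reference implementation; the function name `kto_loss` and the parameters `beta`, `lambda_d`, and `lambda_u` are placeholders chosen for exposition.

```python
import torch


def kto_loss(policy_logps, ref_logps, is_desirable, kl_estimate,
             beta=0.1, lambda_d=1.0, lambda_u=1.0):
    """Sketch of a KTO-style loss over a batch of single outputs.

    policy_logps, ref_logps: log p(y|x) under the policy and the frozen
        reference model, shape [batch].
    is_desirable: bool tensor, True where y was labeled desirable.
    kl_estimate: detached scalar estimate of KL(policy || reference),
        used as the prospect-theoretic reference point.
    """
    # Implied reward: log-ratio of policy to reference (as in other HALOs).
    reward = policy_logps - ref_logps

    # Kahneman-Tversky-style value relative to the KL reference point.
    value_desirable = lambda_d * torch.sigmoid(beta * (reward - kl_estimate))
    value_undesirable = lambda_u * torch.sigmoid(beta * (kl_estimate - reward))
    value = torch.where(is_desirable, value_desirable, value_undesirable)

    # Per-example weight lambda_y, matching the label of each output.
    weight = torch.where(is_desirable,
                         torch.full_like(value, lambda_d),
                         torch.full_like(value, lambda_u))

    # Maximizing utility of generations == minimizing (lambda_y - value).
    return (weight - value).mean()
```

Note that, unlike preference-based losses, nothing here requires paired (chosen, rejected) outputs for the same prompt; each example only carries the binary desirable/undesirable signal the abstract mentions.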

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2402.01306
Document Type: Working Paper