
PersonalLLM: Tailoring LLMs to Individual Preferences

Authors:
Zollo, Thomas P.
Siah, Andrew Wei Tung
Ye, Naimeng
Li, Ang
Namkoong, Hongseok
Publication Year:
2024

Abstract

As LLMs become capable of complex tasks, there is growing potential for personalized interactions tailored to the subtle and idiosyncratic preferences of each user. We present PersonalLLM, a public benchmark focused on adapting LLMs to provide maximal benefit for a particular user. Departing from existing alignment benchmarks that implicitly assume uniform preferences, we curate open-ended prompts paired with many high-quality answers over which users would be expected to display heterogeneous latent preferences. Rather than persona-prompting LLMs based on high-level attributes (e.g., a user's race or preferred response length), which yields preferences that are homogeneous relative to those of real humans, we develop a method that can simulate a large user base with diverse preferences from a set of pre-trained reward models. Our dataset and generated personalities offer an innovative testbed for developing personalization algorithms that grapple with continual data sparsity, i.e., little relevant feedback from any particular user, by leveraging historical data from other, similar users. We explore basic in-context learning and meta-learning baselines to illustrate the utility of PersonalLLM and to highlight the need for future methodological development. Our dataset is available at https://huggingface.co/datasets/namkoong-lab/PersonalLLM

Comment: 28 pages, 6 figures
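The simulated-user idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the reward-model scores are replaced by random placeholder values, and the Dirichlet weighting over reward models is an assumption made here for concreteness. Each simulated "personality" weights a set of reward models, and its preferred answer to a prompt is the one maximizing the weighted reward.

```python
import numpy as np

rng = np.random.default_rng(0)

N_REWARD_MODELS = 10  # stand-in for a set of pre-trained reward models
N_RESPONSES = 8       # candidate high-quality answers for one prompt

def sample_user(alpha: float = 0.1, n_models: int = N_REWARD_MODELS) -> np.ndarray:
    """A simulated personality: Dirichlet weights over reward models.
    A small alpha concentrates mass on few models, yielding more
    heterogeneous preferences across the simulated user base."""
    return rng.dirichlet(alpha * np.ones(n_models))

def preferred_response(user_weights: np.ndarray, reward_scores: np.ndarray) -> int:
    """reward_scores: (n_models, n_responses) matrix of per-model scores.
    The user's utility for each response is the weighted sum of rewards."""
    utility = user_weights @ reward_scores
    return int(np.argmax(utility))

# Placeholder scores; in the real benchmark these would come from
# actual pre-trained reward models scoring each candidate answer.
scores = rng.normal(size=(N_REWARD_MODELS, N_RESPONSES))

users = [sample_user() for _ in range(100)]
picks = [preferred_response(w, scores) for w in users]
print("distinct top picks among 100 simulated users:", len(set(picks)))
```

Because the weights differ per user, different simulated users prefer different answers to the same prompt, which is the kind of heterogeneous latent preference the benchmark is built around.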

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2409.20296
Document Type:
Working Paper