
Democratizing Reward Design for Personal and Representative Value-Alignment

Authors: Blair, Carter; Larson, Kate; Law, Edith
Publication Year: 2024

Abstract

Aligning AI agents with human values is challenging due to diverse and subjective notions of values. Standard alignment methods often aggregate crowd feedback, which can result in the suppression of unique or minority preferences. We introduce Interactive-Reflective Dialogue Alignment, a method that iteratively engages users in reflecting on and specifying their subjective value definitions. This system learns individual value definitions through language-model-based preference elicitation and constructs personalized reward models that can be used to align AI behaviour. We evaluated our system through two studies with 30 participants, one focusing on "respect" and the other on ethical decision-making in autonomous vehicles. Our findings demonstrate diverse definitions of value-aligned behaviour and show that our system can accurately capture each person's unique understanding. This approach enables personalized alignment and can inform more representative and interpretable collective alignment strategies.

Comment: 19 pages, 16 figures
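The abstract describes constructing a personalized reward model from a single user's elicited preferences. As a rough illustration only, not the paper's Interactive-Reflective Dialogue Alignment pipeline, the Python sketch below fits a linear Bradley-Terry reward model to one user's pairwise preference labels; the feature representation, function names, and hyperparameters are all assumptions.

# Minimal, generic sketch of fitting a personal reward model from pairwise
# preferences (Bradley-Terry style). This is NOT the paper's system; the
# linear reward form and all names here are illustrative assumptions.
import numpy as np

def fit_reward_model(X_preferred, X_rejected, lr=0.1, steps=2000, seed=0):
    """Fit w so that reward(x) = w @ x scores preferred items above rejected ones.

    X_preferred, X_rejected: arrays of shape (n_pairs, n_features); row i of
    X_preferred was chosen over row i of X_rejected by the user.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X_preferred.shape[1])
    for _ in range(steps):
        # Bradley-Terry: P(preferred beats rejected) = sigmoid(r_pref - r_rej)
        diff = X_preferred - X_rejected
        p = 1.0 / (1.0 + np.exp(-(diff @ w)))
        # Gradient of the negative log-likelihood with respect to w
        grad = -((1.0 - p)[:, None] * diff).mean(axis=0)
        w -= lr * grad
    return w

if __name__ == "__main__":
    # Toy example: a simulated user who rewards feature 0 and penalizes feature 2.
    rng = np.random.default_rng(1)
    true_w = np.array([2.0, 0.0, -1.5])
    A = rng.normal(size=(200, 3))
    B = rng.normal(size=(200, 3))
    prefers_A = (A @ true_w) > (B @ true_w)
    X_pref = np.where(prefers_A[:, None], A, B)
    X_rej = np.where(prefers_A[:, None], B, A)
    print("learned reward weights:", np.round(fit_reward_model(X_pref, X_rej), 2))

In the paper's setting, the pairwise labels would presumably come from the language-model-mediated dialogue with each participant rather than from a simulated user as above.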

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2410.22203
Document Type: Working Paper