
Iterative Reward Shaping using Human Feedback for Correcting Reward Misspecification

Authors :
Gajcin, Jasmina
McCarthy, James
Nair, Rahul
Marinescu, Radu
Daly, Elizabeth
Dusparic, Ivana
Publication Year :
2023

Abstract

A well-defined reward function is crucial for successfully training a reinforcement learning (RL) agent. However, defining a suitable reward function is notoriously challenging, especially in complex, multi-objective environments. Developers often have to start from an initial, potentially misspecified reward function and iteratively adjust its parameters based on observed learned behavior. In this work, we aim to automate this process by proposing ITERS, an iterative reward shaping approach that uses human feedback to mitigate the effects of a misspecified reward function. Our approach allows the user to provide trajectory-level feedback on the agent's behavior during training, which is integrated as a reward shaping signal in the following training iteration. We also allow the user to provide explanations of their feedback, which are used to augment the feedback and reduce user effort and feedback frequency. We evaluate ITERS in three environments and show that it can successfully correct misspecified reward functions.

Comment: 7 pages, 2 figures
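The abstract describes a loop: train under a possibly misspecified reward, collect trajectories, gather trajectory-level human feedback, and fold that feedback into a shaping signal for the next iteration. The paper's actual method is not reproduced here; the following is only a minimal toy sketch of that loop, in which every name (ToyEnv, human_feedback, run_iteration, the shaping table, the credit-assignment rule) is an illustrative assumption, and the explanation-augmentation step is omitted.

```python
import random

class ToyEnv:
    """One-dimensional walk; the intended goal is to reach position +5."""
    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action is -1 or +1
        self.pos += action
        done = abs(self.pos) >= 5
        # Misspecified reward: pays for any movement, regardless of direction,
        # so it cannot distinguish reaching +5 from reaching -5.
        reward = 1.0
        return self.pos, reward, done

def human_feedback(trajectory):
    """Stand-in for the user: trajectory-level feedback on the end state."""
    final_state = trajectory[-1][0]
    return 1.0 if final_state > 0 else -1.0

def run_iteration(env, shaping, episodes=20):
    """Collect trajectories with a greedy-plus-noise policy over shaping."""
    trajectories = []
    for _ in range(episodes):
        env.reset()
        done, traj = False, []
        while not done:
            # Prefer the action with the higher shaped value; explore a bit.
            action = max((-1, 1), key=lambda a: shaping.get(a, 0.0))
            if random.random() < 0.1:
                action = random.choice((-1, 1))
            state, reward, done = env.step(action)
            traj.append((state, action, reward))
        trajectories.append(traj)
    return trajectories

random.seed(0)
env, shaping = ToyEnv(), {}
for iteration in range(3):
    trajectories = run_iteration(env, shaping)
    # Trajectory-level feedback becomes a shaping signal spread over the
    # actions that appeared in the trajectory (a crude credit assignment).
    for traj in trajectories:
        fb = human_feedback(traj)
        for _, action, _ in traj:
            shaping[action] = shaping.get(action, 0.0) + 0.1 * fb

# After a few iterations, shaping favours moving right (+1) even though
# the environment reward itself never did.
```

The point of the sketch is the structure, not the learning rule: feedback is given per trajectory, converted into a shaping term, and only takes effect in the *next* training iteration, matching the iterative scheme the abstract outlines.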

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2308.15969
Document Type :
Working Paper