
Deep Reinforcement Learning from Hierarchical Preference Design

Authors :
Bukharin, Alexander
Li, Yixiao
He, Pengcheng
Zhao, Tuo
Publication Year :
2023

Abstract

Reward design is a fundamental, yet challenging aspect of reinforcement learning (RL). Researchers typically utilize feedback signals from the environment to handcraft a reward function, but this process is not always effective due to the varying scale and intricate dependencies of the feedback signals. This paper shows that by exploiting certain structures, one can ease the reward design process. Specifically, we propose a hierarchical reward modeling framework -- HERON -- for two scenarios: (I) the feedback signals naturally present a hierarchy; (II) the reward is sparse, but less important surrogate feedback is available to help policy learning. Both scenarios allow us to design a hierarchical decision tree, induced by the importance ranking of the feedback signals, to compare RL trajectories. With such preference data, we can then train a reward model for policy learning. We apply HERON to several RL applications, and we find that our framework can not only train high-performing agents on a variety of difficult tasks, but also provide additional benefits such as improved sample efficiency and robustness. Our code is available at https://github.com/abukharin3/HERON.
Comment: 28 pages, 14 figures
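The abstract describes comparing trajectories with a decision tree ordered by feedback-signal importance. A minimal sketch of that idea is below; the signal names, margin thresholds, and tie-breaking rule are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of hierarchical preference labeling in the spirit of
# HERON: compare two trajectories signal by signal, most important first.
# Signal names and margins below are assumptions for illustration.

def hierarchical_preference(traj_a, traj_b, ranked_signals, margins):
    """Return 1 if traj_a is preferred, -1 if traj_b, 0 for a full tie.

    traj_a, traj_b: dicts mapping signal name -> aggregate value.
    ranked_signals: signal names ordered by decreasing importance.
    margins: per-signal thresholds; a difference within the margin is
             treated as a tie, and the next signal in the hierarchy decides.
    """
    for signal in ranked_signals:
        diff = traj_a[signal] - traj_b[signal]
        if abs(diff) > margins[signal]:
            return 1 if diff > 0 else -1
    return 0  # indistinguishable at every level of the hierarchy

# Example: task success outranks speed, which outranks energy efficiency.
a = {"success": 1.0, "speed": 0.4, "energy": 0.9}
b = {"success": 1.0, "speed": 0.7, "energy": 0.2}
pref = hierarchical_preference(
    a, b,
    ranked_signals=["success", "speed", "energy"],
    margins={"success": 0.1, "speed": 0.1, "energy": 0.1},
)
# Success ties (|diff| = 0.0 <= 0.1), so speed decides: traj_b is preferred.
```

Preference labels produced this way over pairs of trajectories could then supply the training data for a learned reward model, as the abstract outlines.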

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2309.02632
Document Type :
Working Paper