
Constraints as Rewards: Reinforcement Learning for Robots without Reward Functions

Authors:
Ishihara, Yu
Takasugi, Noriaki
Kawakami, Kotaro
Kinoshita, Masaya
Aoyama, Kazumi
Publication Year:
2025

Abstract

Reinforcement learning has become an essential algorithm for generating complex robotic behaviors. However, to learn such behaviors, it is necessary to design a reward function that describes the task, which often consists of multiple objectives that need to be balanced. This tuning process is known as reward engineering and typically involves extensive trial and error. In this paper, to avoid this trial-and-error process, we propose the concept of Constraints as Rewards (CaR). CaR formulates the task objective using multiple constraint functions instead of a reward function and solves a reinforcement learning problem with constraints using the Lagrangian method. With this approach, the different objectives are balanced automatically, because the Lagrange multipliers serve as the weights among the objectives. In addition, we demonstrate that constraints expressed as inequalities provide an intuitive interpretation of the optimization target designed for the task. We apply the proposed method to the standing-up motion generation task of a six-wheeled-telescopic-legged robot and demonstrate that it successfully acquires the target behavior, even though this behavior is challenging to learn with manually designed reward functions.
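To make the Lagrangian mechanism in the abstract concrete, the sketch below shows dual gradient ascent on the multipliers for inequality constraints g_i ≤ 0: a multiplier grows while its constraint is violated and decays (clipped at zero) once it is satisfied, so the multipliers act as automatically tuned weights among the objectives. This is a minimal illustrative toy with hypothetical function names, not the authors' implementation.

```python
def lagrangian_update(constraints, multipliers, lr=0.1):
    # Dual ascent step: lambda_i <- max(0, lambda_i + lr * g_i(x)).
    # Violated constraints (g_i > 0) push their multiplier up;
    # satisfied ones (g_i < 0) let it decay toward zero.
    return [max(0.0, lam + lr * g) for g, lam in zip(constraints, multipliers)]

def penalty(constraints, multipliers):
    # Scalar term the policy would minimize: sum_i lambda_i * g_i(x).
    # The multipliers replace hand-tuned reward weights.
    return sum(lam * g for g, lam in zip(constraints, multipliers))

# Toy example: one violated constraint (+0.5) and one satisfied (-0.2).
g = [0.5, -0.2]
lam = [1.0, 1.0]
for _ in range(20):
    lam = lagrangian_update(g, lam)
# The first multiplier has grown; the second has decayed.
```

In a full CaR-style setup, the policy update (minimizing the penalty) and the multiplier update would alternate, with g evaluated from rollouts rather than held fixed as in this toy.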

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2501.04228
Document Type: Working Paper