
Reachability Constrained Reinforcement Learning

Authors:
Yu, Dongjie
Ma, Haitong
Li, Shengbo Eben
Chen, Jianyu
Publication Year: 2022

Abstract

Constrained reinforcement learning (CRL) has gained significant interest recently, since the satisfaction of safety constraints is critical for real-world problems. However, existing CRL methods, which constrain discounted cumulative costs, generally lack a rigorous definition of safety and a corresponding guarantee. In contrast, in safe control research, safety is defined as persistently satisfying certain state constraints. Such persistent safety is possible only on a subset of the state space, called the feasible set; for a given environment, there exists a largest feasible set. Recent studies incorporate feasible sets into CRL with energy-based methods such as the control barrier function (CBF) and the safety index (SI); these methods leverage prior conservative estimations of feasible sets, which harms the performance of the learned policy. To address this problem, this paper proposes the reachability CRL (RCRL) method, which uses reachability analysis to establish a novel self-consistency condition and characterize the feasible sets. The feasible sets are represented by the safety value function, which is used as the constraint in CRL. We use multi-time-scale stochastic approximation theory to prove that the proposed algorithm converges to a local optimum where the largest feasible set is guaranteed. Empirical results on different benchmarks validate the learned feasible set, the policy performance, and the constraint satisfaction of RCRL compared to CRL and safe control baselines.

Comment: Accepted by ICML 2022
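The self-consistency condition named in the abstract can be illustrated with a short sketch based on standard Hamilton-Jacobi reachability analysis; the notation here is illustrative and not quoted from the paper. Assuming a constraint function h with h(s) <= 0 meaning that state s satisfies the constraints, and a deterministic successor state s' under policy \pi, the safety value function records the worst constraint value along the trajectory and obeys a max-form Bellman recursion:

    V_h^\pi(s) = \max_{t \in \mathbb{N}} h(s_t), \qquad V_h^\pi(s) = \max\{\, h(s),\; V_h^\pi(s') \,\}

The feasible set of \pi is then \{ s : V_h^\pi(s) \le 0 \}, so constraining CRL with V_h^\pi(s) \le 0 enforces persistent, state-wise safety rather than a bound on discounted cumulative cost.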

Details

Database: OAIster
Publication Type: Electronic Resource
Accession number: edsoai.on1333770880
Document Type: Electronic Resource