
Uniformly Safe RL with Objective Suppression for Multi-Constraint Safety-Critical Applications

Authors:
Zhou, Zihan
Booher, Jonathan
Rohanimanesh, Khashayar
Liu, Wei
Petiushko, Aleksandr
Garg, Animesh
Publication Year: 2024

Abstract

Safe reinforcement learning tasks remain challenging despite being common in the real world. The widely adopted CMDP model constrains risk only in expectation, which leaves room for dangerous behaviors in long-tail states. In safety-critical domains, such behaviors could lead to disastrous outcomes. To address this issue, we first describe the problem with a stronger Uniformly Constrained MDP (UCMDP) model, in which constraints are imposed on all reachable states; we then propose Objective Suppression, a novel method that adaptively suppresses the task-reward-maximizing objective according to a safety critic, as a solution to the Lagrangian dual of a UCMDP. We benchmark Objective Suppression in two multi-constraint safety domains, including an autonomous driving domain where any incorrect behavior can lead to disastrous consequences. In the driving domain, we evaluate on open-source and proprietary data and assess transfer to a real autonomous fleet. Empirically, we demonstrate that our proposed method, when combined with existing safe RL algorithms, can match the task reward achieved by baselines with significantly fewer constraint violations.
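For context, a minimal LaTeX sketch of the distinction the abstract draws between the two models; the notation here (task objective J_r, per-constraint cost value V_{c_i}^pi, and budgets d_i) is an assumption for illustration, not taken from the paper:

    % CMDP: cumulative cost is constrained only in expectation from the start state
    \max_{\pi} \; J_r(\pi) \quad \text{s.t.} \quad V_{c_i}^{\pi}(s_0) \le d_i \quad \forall i

    % UCMDP: the same cost constraints must hold at every state reachable under pi
    \max_{\pi} \; J_r(\pi) \quad \text{s.t.} \quad V_{c_i}^{\pi}(s) \le d_i \quad \forall i,\; \forall s \in \mathcal{S}_{\text{reach}}(\pi)

The Lagrangian dual of the UCMDP then attaches a multiplier to each reachable state's constraint, which is what motivates a per-state, critic-driven trade-off between reward and cost rather than a single global multiplier.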

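A minimal, hypothetical sketch of the suppression idea described in the abstract: the task-reward objective is down-weighted by a gate computed from a safety critic's predicted cost-to-go. The sigmoid gate, the beta sharpness parameter, and the per-state budget d are illustrative assumptions; the paper's exact suppression rule may differ.

    import torch

    def objective_suppression(reward_obj: torch.Tensor,
                              safety_q: torch.Tensor,
                              d: float = 0.0,
                              beta: float = 10.0) -> torch.Tensor:
        # Gate in (0, 1): near 1 when the safety critic predicts a cost-to-go
        # well under the per-state budget d, near 0 when a violation looms.
        # (The sigmoid gate and beta are assumptions, not the paper's rule.)
        gate = torch.sigmoid(beta * (d - safety_q)).detach()
        # Suppress only the task-reward term; the cost-minimizing term stays
        # active so the policy is always pushed toward safety.
        return gate * reward_obj - safety_q

    # Usage: maximize the suppressed objective with any policy-gradient method.
    reward_obj = torch.tensor(1.5)   # task-reward surrogate at a sampled state
    safety_q = torch.tensor(0.2)     # critic's predicted cost-to-go there
    print(objective_suppression(reward_obj, safety_q))

With multiple constraints, one such gate per safety critic could be applied multiplicatively, which matches the multi-constraint setting the paper benchmarks; this composition is likewise an assumption.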
Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2402.15650
Document Type: Working Paper