
DESTA: A Framework for Safe Reinforcement Learning with Markov Games of Intervention

Authors:
Mguni, David
Islam, Usman
Sun, Yaqi
Zhang, Xiuling
Jennings, Joel
Sootla, Aivar
Yu, Changmin
Wang, Ziyan
Wang, Jun
Yang, Yaodong
Publication Year:
2021

Abstract

Reinforcement learning (RL) involves performing exploratory actions in an unknown system. This can place a learning agent in dangerous and potentially catastrophic system states. Current approaches for tackling safe learning in RL simultaneously trade off safe exploration and task fulfilment. In this paper, we introduce a new generation of RL solvers that learn to minimise safety violations while maximising the task reward to the extent that can be tolerated by the safe policy. Our approach introduces a novel two-player framework for safe RL called the Distributive Exploration Safety Training Algorithm (DESTA). The core of DESTA is a game between two adaptive agents: a Safety Agent that is delegated the task of minimising safety violations and a Task Agent whose goal is to maximise the environment reward. Specifically, the Safety Agent can selectively take control of the system at any given point to prevent safety violations, while the Task Agent is free to execute its policy at all other states. This framework enables the Safety Agent to learn to take actions at certain states that minimise future safety violations, both at training and at testing time, while the Task Agent performs actions that maximise the task performance everywhere else. Theoretically, we prove that DESTA converges to stable points, enabling the safety violations of pretrained policies to be minimised. Empirically, we show DESTA's ability first to augment the safety of existing policies and second to construct safe RL policies when the Task Agent and Safety Agent are trained concurrently. We demonstrate DESTA's superior performance against leading RL methods in Lunar Lander and Frozen Lake from OpenAI Gym.

Comment: arXiv admin note: text overlap with arXiv:2103.09159
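The intervention mechanism described in the abstract, where a Safety Agent may selectively take control at any state and the Task Agent acts everywhere else, can be sketched in a few lines. This is a minimal illustration of the switching idea only, not the paper's actual algorithm; the function and policy names (`safety_intervention_step`, `near_danger`, and the toy policies) are all hypothetical.

```python
def safety_intervention_step(state, task_policy, safety_policy, intervene):
    """One environment step under a two-agent intervention scheme.

    The safety agent first decides, via `intervene(state)`, whether to
    take control at this state; if so, its action overrides the task
    agent's. Otherwise the task agent's action is executed unchanged.
    Returns the chosen action and a flag recording who acted.
    """
    if intervene(state):
        return safety_policy(state), True   # safety agent takes control
    return task_policy(state), False        # task agent acts freely


# Toy usage: a 1-D state where small values count as "near danger".
task_policy = lambda s: "thrust"    # task agent's reward-seeking action
safety_policy = lambda s: "hover"   # conservative fallback action
near_danger = lambda s: s < 0.1     # hypothetical safety trigger

action, intervened = safety_intervention_step(
    0.05, task_policy, safety_policy, near_danger
)
# In this unsafe state the safety agent intervenes, so action == "hover".
```

In the full framework both agents are learned, so the intervention rule itself adapts to minimise future safety violations rather than being a fixed hand-written test as in this sketch.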

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2110.14468
Document Type:
Working Paper