
Reachability Verification Based Reliability Assessment for Deep Reinforcement Learning Controlled Robotics and Autonomous Systems

Authors:
Dong, Yi
Zhao, Xingyu
Wang, Sen
Huang, Xiaowei
Publication Year:
2022
Publisher:
arXiv, 2022.

Abstract

Deep Reinforcement Learning (DRL) has achieved impressive performance in robotics and autonomous systems (RASs). A key impediment to its deployment in real-life operations is spuriously unsafe DRL policies: unexplored states may lead the agent to make wrong decisions that may cause hazards, especially in applications where end-to-end controllers of the RAS are trained by DRL. In this paper, we propose a novel quantitative reliability assessment framework for DRL-controlled RASs, leveraging verification evidence generated from formal reliability analysis of neural networks. A two-level verification framework is introduced to check the safety property with respect to inaccurate observations due to, e.g., environmental noise and state changes. At the local level, reachability verification tools are used to generate safety evidence for trajectories; at the global level, the overall reliability is quantified as an aggregation of the local safety evidence, weighted according to an operational profile. The effectiveness of the proposed verification framework is demonstrated and validated via experiments on real RASs.

Comment: Submitted, under review
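
The abstract describes aggregating local verification results into a global reliability figure by weighting them with an operational profile. The following is a minimal sketch of one way such an aggregation could look; it is not the authors' implementation, and the data structure, field names, and the specific weighted-sum rule are illustrative assumptions only.

```python
# Sketch: operational-profile-weighted aggregation of local verification
# evidence into a global unreliability estimate. All names and the
# aggregation rule are assumptions for illustration, not the paper's method.

from dataclasses import dataclass


@dataclass
class LocalEvidence:
    state_id: str          # representative initial state / trajectory seed
    op_weight: float       # operational-profile probability of this state
    verified_safe: bool    # local reachability verification succeeded
    unsafety_bound: float  # local bound on violating the safety property


def aggregate_unreliability(evidence: list[LocalEvidence]) -> float:
    """Operational-profile-weighted sum of local unsafety estimates.

    Regions proved safe by local verification contribute zero; otherwise
    the local unsafety bound is used.
    """
    total_weight = sum(e.op_weight for e in evidence)
    if total_weight == 0:
        raise ValueError("operational-profile weights sum to zero")
    weighted = sum(
        e.op_weight * (0.0 if e.verified_safe else e.unsafety_bound)
        for e in evidence
    )
    return weighted / total_weight


if __name__ == "__main__":
    # Hypothetical evidence from three local verification runs.
    evidence = [
        LocalEvidence("s0", op_weight=0.6, verified_safe=True, unsafety_bound=0.0),
        LocalEvidence("s1", op_weight=0.3, verified_safe=False, unsafety_bound=0.02),
        LocalEvidence("s2", op_weight=0.1, verified_safe=False, unsafety_bound=0.10),
    ]
    print(f"estimated probability of unsafe operation: "
          f"{aggregate_unreliability(evidence):.4f}")
```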

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....54b57f09f48533f7ac56bb79c5a43d45
Full Text:
https://doi.org/10.48550/arxiv.2210.14991