Adversarial Attacks to Reward Machine-based Reinforcement Learning
- Publication Year :
- 2023
Abstract
- In recent years, Reward Machines (RMs) have stood out as a simple yet effective automata-based formalism for exposing and exploiting task structure in reinforcement learning settings. Despite their relevance, little attention has been directed to their security implications and robustness in adversarial scenarios, likely due to their recent appearance in the literature. With my thesis, I aim to provide the first analysis of the security of RM-based reinforcement learning techniques, in the hope of motivating further research in the field, and I propose and evaluate a novel class of attacks on RM-based techniques: blinding attacks.
- Comment: Thesis Supervisor: Prof. Federico Cerutti (Università degli Studi di Brescia, IT)
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2311.09014
- Document Type :
- Working Paper