1. Learning Robust Reward Machines from Noisy Labels
- Authors
Roko Parac, Lorenzo Nodari, Leo Ardon, Daniel Furelos-Blanco, Federico Cerutti, and Alessandra Russo
- Subjects
Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
This paper presents PROB-IRM, an approach that learns robust reward machines (RMs) for reinforcement learning (RL) agents from noisy execution traces. The key aspect of RM-driven RL is the exploitation of a finite-state machine that decomposes the agent's task into different subtasks. PROB-IRM uses a state-of-the-art inductive logic programming framework, robust to noisy examples, to learn RMs from noisy traces using Bayesian posterior degrees of belief, thus ensuring robustness against inconsistencies. Pivotal to the results is the interleaving of RM learning and policy learning: a new RM is learned whenever the RL agent generates a trace that is believed not to be accepted by the current RM. To speed up the training of the RL agent, PROB-IRM employs a probabilistic formulation of reward shaping that uses the Bayesian posterior beliefs derived from the traces. Our experimental analysis shows that PROB-IRM can learn (potentially imperfect) RMs from noisy traces and exploit them to train an RL agent to solve its tasks successfully. Despite the complexity of learning the RM from noisy traces, agents trained with PROB-IRM perform comparably to agents provided with handcrafted RMs.
- Comment
Preprint accepted for publication at the 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024)
- Published
2024
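The abstract's trigger for relearning, a trace that is "believed not to be accepted by the current RM" under Bayesian posterior beliefs over noisy labels, can be illustrated with a small computation. The sketch below is a minimal, self-contained Python illustration, not the paper's implementation: `acceptance_belief`, `rm_accepts`, and the brute-force marginalisation over label sequences are all hypothetical stand-ins for what the paper describes at a high level.

```python
from itertools import product

def acceptance_belief(rm_accepts, label_probs):
    """Posterior belief that the (unknown) true label sequence is accepted.

    label_probs: one dict per trace step, mapping candidate label -> posterior prob.
    rm_accepts:  predicate over a label sequence (stands in for running the RM).
    Marginalises over every possible label sequence; exponential in trace
    length, which is acceptable only for this toy illustration.
    """
    belief = 0.0
    for combo in product(*[p.items() for p in label_probs]):
        seq = tuple(label for label, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        if rm_accepts(seq):
            belief += prob
    return belief

def rm_accepts(seq):
    # Toy two-state RM: accept iff label 'a' is observed strictly before 'b'.
    seen_a = False
    for label in seq:
        if label == 'a':
            seen_a = True
        elif label == 'b':
            return seen_a
    return False

# A noisy two-step trace: the labelling function is uncertain at each step.
noisy_trace = [{'a': 0.9, 'b': 0.1}, {'b': 0.8, 'a': 0.2}]
print(acceptance_belief(rm_accepts, noisy_trace))  # ~0.72
```

In the interleaved loop the abstract describes, an acceptance belief like this falling below some threshold would prompt the ILP learner to induce a new RM from the accumulated traces; the snippet only shows how noisy per-step labels aggregate into a single posterior belief about acceptance.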