Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators
- Publication Year: 2021
Abstract
- Recent advances in reinforcement learning (RL) have led to growing interest in applying RL to classical planning domains, and classical planning methods to complex RL domains. However, the long-horizon, goal-based problems found in classical planning yield sparse rewards for RL, making direct application inefficient. In this paper, we propose leveraging the domain-independent heuristic functions commonly used in the classical planning literature to improve the sample efficiency of RL. These classical heuristics act as dense reward generators that alleviate the sparse-reward issue and enable our RL agent to learn domain-specific value functions as residuals on the heuristics, making learning easier. Applying this technique correctly requires reconciling the discounted metric used in RL with the undiscounted metric used in classical heuristics. We implement the value functions using Neural Logic Machines, a neural network architecture designed for grounded first-order logic inputs. We demonstrate on several classical planning domains that using classical heuristics for RL yields good sample efficiency compared to sparse-reward RL. We further show that our learned value functions generalize to novel problem instances in the same domain.
- Comment: Equal contributions by the first two authors. This manuscript is the camera-ready version accepted at ICAPS 2022. It is significantly updated from earlier versions (e.g., the ICAPS PRL (Planning and RL) workshop version), with additional experiments comparing against existing work: STRIPS-HGN (Shen, Trevizan, and Thiebaux 2020) and GBFS-GNN (Rivlin, Hazan, and Karpas 2019).
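The abstract does not spell out the exact shaping rule, but a standard way to turn a planning heuristic h into dense rewards is potential-based reward shaping (Ng, Harada, and Russell 1999) with potential Φ(s) = −h(s), giving shaped reward r'(s, a, s') = r(s, a, s') + γΦ(s') − Φ(s). The sketch below is a minimal illustration under that assumption only; `env`, `policy`, and `heuristic` are hypothetical placeholder interfaces, not the paper's API, and the discounted-vs-undiscounted mismatch the authors mention reconciling is merely flagged in a comment.

```python
def shaped_reward(reward, h_s, h_next, gamma=0.99):
    """Potential-based shaping with potential Phi(s) = -h(s), where h is a
    domain-independent planning heuristic (e.g., h^add or h^FF).
    Note the metric mismatch: h estimates undiscounted cost-to-go, while the
    shaping term uses the RL discount gamma; this is the kind of discrepancy
    the abstract says must be reconciled."""
    return reward + gamma * (-h_next) - (-h_s)

def run_episode(env, policy, heuristic, gamma=0.99):
    """Collect one episode, replacing the sparse goal reward with a
    heuristic-shaped dense reward at every step (hypothetical interfaces)."""
    transitions = []
    state, done = env.reset(), False
    while not done:
        action = policy(state)
        next_state, reward, done = env.step(action)  # sparse: 1 at goal, else 0
        dense = shaped_reward(reward, heuristic(state), heuristic(next_state), gamma)
        transitions.append((state, action, dense, next_state))
        state = next_state
    return transitions

# A step that reduces h from 5 to 4 earns a positive dense reward:
print(shaped_reward(0.0, h_s=5.0, h_next=4.0))  # 0.0 + 0.99*(-4) - (-5) ~= 1.04
```

On top of such densified rewards, the residual idea in the abstract would amount to parameterizing the value function as V_θ(s) = −h(s) + f_θ(s), with f_θ a Neural Logic Machine, so the network only needs to learn a correction to the heuristic rather than the full value function.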
Details
- Database: OAIster
- Publication Type: Electronic Resource
- Accession number: edsoai.on1333721551
- Document Type: Electronic Resource