
SCOPE-RL: A Python Library for Offline Reinforcement Learning and Off-Policy Evaluation

Authors :
Kiyohara, Haruka
Kishimoto, Ren
Kawakami, Kosuke
Kobayashi, Ken
Nakata, Kazuhide
Saito, Yuta
Publication Year :
2023

Abstract

This paper introduces SCOPE-RL, a comprehensive open-source Python library for offline reinforcement learning (offline RL), off-policy evaluation (OPE), and off-policy selection (OPS). Unlike most existing libraries, which focus solely on either policy learning or evaluation, SCOPE-RL seamlessly integrates these two key aspects, enabling flexible and complete implementations of both the offline RL and OPE processes. SCOPE-RL places particular emphasis on its OPE modules, offering a range of OPE estimators and robust protocols for evaluating OPE itself, which enables deeper and more reliable OPE than other packages. For instance, SCOPE-RL enhances OPE by estimating the entire reward distribution under a policy rather than only its point-wise expected value. Additionally, SCOPE-RL provides a more thorough evaluation-of-OPE by reporting the risk-return tradeoff of OPE results, going beyond the accuracy-only evaluations common in the existing OPE literature. SCOPE-RL is designed with user accessibility in mind: its user-friendly APIs, comprehensive documentation, and a variety of easy-to-follow examples help researchers and practitioners efficiently implement and experiment with various offline RL methods and OPE estimators tailored to their specific problem contexts. The documentation of SCOPE-RL is available at https://scope-rl.readthedocs.io/en/latest/.

Comment: preprint, open-source software: https://github.com/hakuhodo-technologies/scope-rl
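To make the distributional-OPE claim concrete, the sketch below shows the underlying idea in plain NumPy: importance-weighted logged trajectories can yield not just a point estimate of the expected return but the full return distribution (CDF) and risk metrics such as CVaR. This is an illustrative sketch only, not SCOPE-RL's API; all function names, data shapes, and the toy data are hypothetical, and SCOPE-RL's own estimators should be used through its documented interfaces.

```python
import numpy as np

def trajectory_is_weights(behavior_probs, eval_probs):
    """Per-trajectory importance weight: prod_t pi_e(a_t|s_t) / pi_b(a_t|s_t).

    Both arguments are (n_trajectories, horizon) arrays holding the probability
    each policy assigns to the logged action at each step (hypothetical shapes).
    """
    return np.prod(eval_probs / behavior_probs, axis=1)

def ope_point_estimate(returns, weights):
    """Standard trajectory-wise importance-sampling estimate of expected return."""
    return float(np.mean(weights * returns))

def ope_cdf_estimate(returns, weights, thresholds):
    """Distributional OPE: estimate F(t) = P(return <= t) under the evaluation
    policy via self-normalized importance sampling."""
    w = weights / weights.sum()
    return np.array([np.sum(w * (returns <= t)) for t in thresholds])

def cvar_estimate(returns, weights, alpha=0.1):
    """CVaR_alpha: average return over the worst alpha-fraction of the
    importance-weighted return distribution (boundary atom ignored for brevity)."""
    order = np.argsort(returns)
    r, w = returns[order], weights[order] / weights.sum()
    in_tail = np.cumsum(w) <= alpha
    if not in_tail.any():
        return float(r[0])
    return float(np.sum(w[in_tail] * r[in_tail]) / np.sum(w[in_tail]))

# Toy logged dataset (all quantities synthetic and hypothetical).
rng = np.random.default_rng(0)
n, horizon = 1000, 10
behavior_probs = rng.uniform(0.2, 0.8, size=(n, horizon))
eval_probs = np.clip(behavior_probs + rng.normal(0, 0.1, (n, horizon)), 0.05, 0.95)
returns = rng.normal(1.0, 0.5, size=n)

w = trajectory_is_weights(behavior_probs, eval_probs)
print("expected return :", ope_point_estimate(returns, w))
print("CDF at {0,1,2}  :", ope_cdf_estimate(returns, w, thresholds=[0.0, 1.0, 2.0]))
print("CVaR(alpha=0.1) :", cvar_estimate(returns, w, alpha=0.1))
```

The "risk-return tradeoff" of OPE results can likewise be summarized by inspecting the true performance of the k policies an estimator ranks highest, rather than only the estimator's accuracy. A minimal sketch of that idea follows (it reuses NumPy from the block above; the function is hypothetical and merely illustrates best/worst-of-top-k style metrics, not SCOPE-RL's implementation):

```python
def topk_risk_return(true_values, estimated_values, k=3):
    """Among the k policies ranked highest by an OPE estimator, report the best
    (return side) and worst (risk side) true policy value."""
    topk = np.argsort(estimated_values)[::-1][:k]
    return float(true_values[topk].max()), float(true_values[topk].min())
```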

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2311.18206
Document Type :
Working Paper