
Offline Model-Based Adaptable Policy Learning for Decision-Making in Out-of-Support Regions

Authors :
Chen, Xiong-Hui
Luo, Fan-Ming
Yu, Yang
Li, Qingyang
Qin, Zhiwei
Shang, Wenjie
Ye, Jieping
Source :
IEEE Transactions on Pattern Analysis and Machine Intelligence; December 2023, Vol. 45, Issue 12, pp. 15260-15274 (15 pages)
Publication Year :
2023

Abstract

In reinforcement learning, a promising direction for avoiding the cost of online trial and error is learning from an offline dataset. Current offline reinforcement learning methods commonly constrain policy learning to the in-support regions of the offline dataset in order to ensure the robustness of the resulting policies. Such constraints, however, also limit the potential of those policies. In this paper, to unlock the potential of offline policy learning, we investigate decision-making in out-of-support regions directly and propose offline Model-based Adaptable Policy LEarning (MAPLE). With this approach, instead of learning only in in-support regions, we learn an adaptable policy that can adjust its behavior in out-of-support regions when deployed. We give a practical implementation of MAPLE via meta-learning and ensemble model learning techniques. We conduct experiments on MuJoCo locomotion tasks with offline datasets. The results show that the proposed method can make robust decisions in out-of-support regions and achieves better performance than state-of-the-art algorithms.
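To make the abstract's idea of an "adaptable policy" concrete, the following is a minimal numpy sketch of the general pattern the abstract describes: an ensemble of dynamics models standing in for environment variation, and a policy conditioned on a context vector that is updated online from recent transitions so behavior can adapt at deployment. All class names, linear-model forms, and the update rule here are illustrative assumptions, not the paper's actual neural-network architecture or meta-learning procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

class EnsembleDynamics:
    """Illustrative stand-in for a learned model ensemble: K linear models
    s' = A_k s + B_k a, each member representing a plausible environment."""
    def __init__(self, k, state_dim, action_dim):
        self.k = k
        self.A = rng.normal(0.0, 0.1, (k, state_dim, state_dim))
        self.B = rng.normal(0.0, 0.1, (k, state_dim, action_dim))

    def step(self, member, s, a):
        return self.A[member] @ s + self.B[member] @ a

class AdaptablePolicy:
    """Policy conditioned on a context vector z summarizing recent
    transitions; updating z online is what lets behavior adapt in
    regions the offline data did not cover."""
    def __init__(self, state_dim, action_dim, ctx_dim):
        self.W_s = rng.normal(0.0, 0.1, (action_dim, state_dim))
        self.W_z = rng.normal(0.0, 0.1, (action_dim, ctx_dim))
        self.W_enc = rng.normal(0.0, 0.1, (ctx_dim, 2 * state_dim + action_dim))

    def act(self, s, z):
        return np.tanh(self.W_s @ s + self.W_z @ z)

    def update_context(self, z, s, a, s_next, alpha=0.5):
        # Fold the latest transition (s, a, s') into the context estimate.
        feat = np.tanh(self.W_enc @ np.concatenate([s, a, s_next]))
        return (1.0 - alpha) * z + alpha * feat

# Rollout: cycling through ensemble members mimics environment shift;
# the context z adapts within the rollout rather than staying fixed.
dyn = EnsembleDynamics(k=4, state_dim=3, action_dim=2)
pol = AdaptablePolicy(state_dim=3, action_dim=2, ctx_dim=5)
z = np.zeros(5)
s = np.ones(3)
for t in range(10):
    a = pol.act(s, z)
    s_next = dyn.step(t % dyn.k, s, a)
    z = pol.update_context(z, s, a, s_next)
    s = np.clip(s_next, -5.0, 5.0)
```

In the paper's actual implementation the context encoder and policy are trained jointly with meta-learning over rollouts in the learned model ensemble; this sketch only shows the deployment-time shape of that interaction.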

Details

Language :
English
ISSN :
0162-8828
Volume :
45
Issue :
12
Database :
Supplemental Index
Journal :
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publication Type :
Periodical
Accession number :
ejs64449772
Full Text :
https://doi.org/10.1109/TPAMI.2023.3317131