
Leveraging Domain-Unlabeled Data in Offline Reinforcement Learning across Two Domains

Authors:
Nishimori, Soichiro
Cai, Xin-Qiang
Ackermann, Johannes
Sugiyama, Masashi
Publication Year:
2024

Abstract

In this paper, we investigate an offline reinforcement learning (RL) problem where the dataset is collected from two domains. In this scenario, having domain labels on the data facilitates efficient policy training. However, in practice, assigning domain labels can be resource-intensive or infeasible at scale, so much of the data remains domain-unlabeled. To formalize this challenge, we introduce a novel offline RL setting named Positive-Unlabeled Offline RL (PUORL), which incorporates domain-unlabeled data. To address PUORL, we develop an offline RL algorithm that applies positive-unlabeled (PU) learning to predict the domain labels of domain-unlabeled data, enabling that data to be integrated into policy training. Our experiments show that our method accurately identifies domains and learns policies that outperform baselines in the PUORL setting, demonstrating its ability to leverage domain-unlabeled data.
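The abstract does not specify which PU-learning estimator is used to predict domain labels. As a hedged illustration, the sketch below implements the non-negative PU (nnPU) risk of Kiryo et al. (2017), a standard estimator for learning a binary classifier from positive and unlabeled samples; the `DomainClassifier` architecture, the feature input, and the class prior `prior` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainClassifier(nn.Module):
    """Hypothetical scorer over transition features (e.g., concatenated
    state-action vectors); the paper's actual architecture is not given here."""

    def __init__(self, input_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Raw score; > 0 is interpreted as "belongs to the labeled domain".
        return self.net(x).squeeze(-1)


def nnpu_loss(scores_pos: torch.Tensor,
              scores_unl: torch.Tensor,
              prior: float) -> torch.Tensor:
    """Non-negative PU risk (Kiryo et al., 2017).

    scores_pos: classifier scores on domain-labeled (positive) samples.
    scores_unl: classifier scores on domain-unlabeled samples.
    prior: assumed fraction of positive-domain samples among unlabeled data.
    """
    # Logistic loss: softplus(-z) penalizes the positive class,
    # softplus(z) penalizes the negative class.
    risk_pos = prior * F.softplus(-scores_pos).mean()
    # Estimate the negative-class risk from unlabeled data by subtracting
    # the positive contribution; clamp at zero to curb overfitting.
    risk_neg = F.softplus(scores_unl).mean() - prior * F.softplus(scores_pos).mean()
    return risk_pos + torch.clamp(risk_neg, min=0.0)
```

Once such a classifier is trained, thresholding its score on each domain-unlabeled transition yields a pseudo domain label, and the relabeled data can then be fed to an off-the-shelf offline RL algorithm. The clamp on the negative-class risk is the defining feature of nnPU: it keeps the empirical negative risk from going below zero, which would otherwise let the classifier overfit the small positive set.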

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.07465
Document Type:
Working Paper