
Conceptual Reinforcement Learning for Language-Conditioned Tasks

Authors :
Peng, Shaohui
Hu, Xing
Zhang, Rui
Guo, Jiaming
Yi, Qi
Chen, Ruizhi
Du, Zidong
Li, Ling
Guo, Qi
Chen, Yunji
Publication Year :
2023

Abstract

Despite the broad application of deep reinforcement learning (RL), transferring and adapting a policy to unseen but similar environments remains a significant challenge. Recently, language-conditioned policies have been proposed to facilitate policy transfer by learning a joint representation of observation and text that captures the compact and invariant information shared across environments. Existing language-conditioned RL methods often learn the joint representation as a simple latent layer for the given instances (episode-specific observation and text), which inevitably includes noisy or irrelevant information and causes spurious, instance-dependent correlations, hurting generalization performance and training efficiency. To address this issue, we propose a conceptual reinforcement learning (CRL) framework to learn a concept-like joint representation for the language-conditioned policy. The key insight is that concepts are compact and invariant representations formed in human cognition by extracting similarities from numerous instances in the real world. In CRL, we propose a multi-level attention encoder and two mutual information constraints for learning compact and invariant concepts. Verified in two challenging environments, RTFM and Messenger, CRL significantly improves training efficiency (by up to 70%) and generalization ability (by up to 30%) with respect to new environment dynamics.

Comment: Accepted by AAAI 2023
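The abstract describes the method only at a high level. As a rough illustration of the two ingredients it names, the Python sketch below pairs a multi-level cross-attention fusion of observation and text tokens with an InfoNCE-style mutual information objective. All module names, dimensions, hyperparameters, and the choice of InfoNCE as the MI estimator are assumptions made for illustration, not the authors' implementation; see the AAAI 2023 paper (arXiv:2303.05069) for the actual method.

```python
# Hypothetical sketch of a concept-like joint encoder in the spirit of CRL.
# Module structure, dimensions, and loss are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiLevelAttentionEncoder(nn.Module):
    """Fuses observation and text features with stacked cross-attention levels."""

    def __init__(self, dim=128, num_levels=2, num_heads=4):
        super().__init__()
        self.levels = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_levels)
        )
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(num_levels))

    def forward(self, obs_tokens, text_tokens):
        # obs_tokens: (B, N_obs, dim); text_tokens: (B, N_text, dim)
        x = obs_tokens
        for attn, norm in zip(self.levels, self.norms):
            # At each level, observation tokens attend to the instruction text.
            fused, _ = attn(query=x, key=text_tokens, value=text_tokens)
            x = norm(x + fused)
        # Pool to a single concept-like joint representation.
        return x.mean(dim=1)  # (B, dim)


def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE lower bound on the mutual information between paired embeddings.

    Instances assumed to share the same underlying dynamics are treated as
    positive pairs; the other items in the batch serve as negatives.
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)
```

Under these assumptions, an invariance-style constraint would apply `info_nce` to embeddings of different instances drawn from the same dynamics, pulling them toward a shared concept; how the paper instantiates its two constraints is not specified in this record.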

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2303.05069
Document Type :
Working Paper