
Focus On What Matters: Separated Models For Visual-Based RL Generalization

Authors:
Zhang, Di
Lv, Bowen
Zhang, Hai
Yang, Feifan
Zhao, Junqiao
Yu, Hang
Huang, Chang
Zhou, Hongtu
Ye, Chen
Jiang, Changjun
Publication Year:
2024

Abstract

A primary challenge for visual-based Reinforcement Learning (RL) is to generalize effectively across unseen environments. Although previous studies have explored various auxiliary tasks to enhance generalization, few adopt image reconstruction, due to concerns that it exacerbates overfitting to task-irrelevant features during training. Recognizing the strength of image reconstruction for representation learning, we propose SMG (Separated Models for Generalization), a novel approach that exploits image reconstruction for generalization. SMG introduces two model branches that extract task-relevant and task-irrelevant representations separately from visual observations via cooperative reconstruction. Built upon this architecture, we further emphasize the importance of task-relevant features for generalization. Specifically, SMG incorporates two additional consistency losses that guide the agent's focus toward task-relevant areas across different scenarios, thereby avoiding overfitting. Extensive experiments on the DeepMind Control suite (DMC) demonstrate SMG's state-of-the-art generalization performance, particularly in video-background settings. Evaluations on robotic manipulation tasks further confirm the robustness of SMG in real-world applications.
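
The two-branch separation described in the abstract can be made concrete with a minimal sketch: two encoders produce task-relevant and task-irrelevant latents from the same observation, and a shared decoder reconstructs the frame from both, so neither branch alone has to explain the whole image. The module names, layer sizes, and the 84x84 input resolution are illustrative assumptions rather than details from the paper; the consistency losses and the policy/value heads are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Small conv encoder mapping an 84x84 RGB observation to a latent vector."""

    def __init__(self, latent_dim=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(),   # 84 -> 41
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ReLU(),  # 41 -> 20
            nn.Flatten(),
            nn.Linear(32 * 20 * 20, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)


class Decoder(nn.Module):
    """Reconstructs the observation from the concatenated latents of both branches."""

    def __init__(self, latent_dim=50):
        super().__init__()
        self.fc = nn.Linear(2 * latent_dim, 32 * 20 * 20)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 32, kernel_size=3, stride=2), nn.ReLU(),          # 20 -> 41
            nn.ConvTranspose2d(32, 3, kernel_size=3, stride=2, output_padding=1),    # 41 -> 84
        )

    def forward(self, z_relevant, z_irrelevant):
        h = self.fc(torch.cat([z_relevant, z_irrelevant], dim=-1))
        return self.net(h.view(-1, 32, 20, 20))


# Cooperative reconstruction: both latents must jointly explain the pixels,
# while only the task-relevant latent would feed the downstream RL objectives.
obs = torch.rand(8, 3, 84, 84)
task_relevant_enc, task_irrelevant_enc, decoder = Encoder(), Encoder(), Decoder()
z_rel, z_irr = task_relevant_enc(obs), task_irrelevant_enc(obs)
recon_loss = F.mse_loss(decoder(z_rel, z_irr), obs)
```

Because the decoder only ever sees the two latents together, the reconstruction objective can be shared between the branches while the RL losses touch only the task-relevant one; how the paper actually partitions the pixels between branches is not specified here and would follow its consistency losses.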

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.10834
Document Type:
Working Paper