
Episodic Novelty Through Temporal Distance

Authors:
Jiang, Yuhua
Liu, Qihan
Yang, Yiqin
Ma, Xiaoteng
Zhong, Dianyu
Hu, Hao
Yang, Jun
Liang, Bin
Xu, Bo
Zhang, Chongjie
Zhao, Qianchuan
Publication Year: 2025

Abstract

Exploration in sparse reward environments remains a significant challenge in reinforcement learning, particularly in Contextual Markov Decision Processes (CMDPs), where environments differ across episodes. Existing episodic intrinsic motivation methods for CMDPs primarily rely on count-based approaches, which are ineffective in large state spaces, or on similarity-based methods that lack appropriate metrics for state comparison. To address these shortcomings, we propose Episodic Novelty Through Temporal Distance (ETD), a novel approach that introduces temporal distance as a robust metric for state similarity and intrinsic reward computation. By employing contrastive learning, ETD accurately estimates temporal distances and derives intrinsic rewards based on the novelty of states within the current episode. Extensive experiments on various benchmark tasks demonstrate that ETD significantly outperforms state-of-the-art methods, highlighting its effectiveness in enhancing exploration in sparse reward CMDPs.

Comment: ICLR 2025
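The abstract outlines the core mechanism: a temporal-distance metric learned with a contrastive objective, then used to score how novel a state is relative to the states already visited in the current episode. Below is a minimal PyTorch sketch of that idea. The names (`TemporalEncoder`, `contrastive_loss`, `episodic_novelty_reward`), the network sizes, the InfoNCE-style objective, and the cosine-distance proxy for temporal distance are all assumptions made for illustration; they are not the authors' released implementation of ETD.

```python
# Illustrative sketch only. All names, shapes, and the InfoNCE-style
# objective are assumptions for exposition, not the ETD codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalEncoder(nn.Module):
    """Maps observations to an embedding space in which distances are
    trained to reflect temporal distance along trajectories."""

    def __init__(self, obs_dim: int, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def contrastive_loss(encoder: TemporalEncoder,
                     anchors: torch.Tensor,
                     positives: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style objective: anchor/positive pairs sampled a few steps
    apart on the same trajectory are pulled together, while the other
    states in the batch act as negatives. Embedding distance then serves
    as a proxy for temporal distance."""
    za = F.normalize(encoder(anchors), dim=-1)
    zp = F.normalize(encoder(positives), dim=-1)
    logits = (za @ zp.T) / temperature  # (B, B) similarity matrix
    labels = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(logits, labels)


def episodic_novelty_reward(encoder: TemporalEncoder,
                            state: torch.Tensor,
                            episodic_memory: torch.Tensor) -> torch.Tensor:
    """Intrinsic reward: estimated distance from the current state to its
    nearest neighbour among states visited earlier in this episode.
    Large distance => novel state => large exploration bonus."""
    if episodic_memory.numel() == 0:
        return torch.tensor(1.0)  # first state of the episode
    z = F.normalize(encoder(state.unsqueeze(0)), dim=-1)
    zm = F.normalize(encoder(episodic_memory), dim=-1)
    # 1 - cosine similarity as a monotone stand-in for temporal distance
    dists = 1.0 - (z @ zm.T).squeeze(0)
    return dists.min()
```

Note that the paper's framing is of temporal distance (expected number of environment steps between states) as the metric itself; the cosine-distance surrogate above is a simplification chosen to keep the sketch self-contained.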

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2501.15418
Document Type: Working Paper