1. Double Actor-Critic with TD Error-Driven Regularization in Reinforcement Learning
- Authors
Chen, Haohui; Chen, Zhiyong; Liu, Aoxiang; and Fang, Wentuo
- Subjects
Computer Science - Machine Learning; Computer Science - Artificial Intelligence
- Abstract
To obtain better value estimation in reinforcement learning, we propose a novel algorithm based on the double actor-critic framework with temporal difference (TD) error-driven regularization, abbreviated as TDDR. TDDR employs double actors, each paired with its own critic, thereby fully leveraging the advantages of double critics. Additionally, TDDR introduces an innovative critic regularization architecture. Compared to classical deterministic policy gradient-based algorithms that lack a double actor-critic structure, TDDR provides superior value estimation. Moreover, unlike existing algorithms with double actor-critic frameworks, TDDR introduces no additional hyperparameters, which significantly simplifies design and implementation. Experiments demonstrate that TDDR is strongly competitive with benchmark algorithms on challenging continuous control tasks.
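The double actor-critic idea in the abstract can be illustrated with a schematic TD target: each actor proposes an action for the next state, its paired critic evaluates that action, and taking the minimum over the two actor-critic pairs curbs overestimation, in the spirit of clipped double Q-learning. The sketch below is illustrative only; the toy linear actors and critics are hypothetical and the function does not reproduce the paper's exact TDDR update.

```python
import numpy as np

def td_target(reward, next_state, done, actors, critics, gamma=0.99):
    """Schematic double actor-critic TD target (illustrative sketch).

    Each actor i proposes an action for next_state; its paired critic
    evaluates that action.  The minimum over the pairs mitigates value
    overestimation, as in clipped double Q-learning.
    """
    candidates = [critic(next_state, actor(next_state))
                  for actor, critic in zip(actors, critics)]
    return reward + gamma * (1.0 - done) * min(candidates)

# Toy linear actors/critics for demonstration (hypothetical, not from the paper).
actor1 = lambda s: 0.5 * s
actor2 = lambda s: -0.5 * s
critic1 = lambda s, a: float(s + a)
critic2 = lambda s, a: float(s - a)

y = td_target(reward=1.0, next_state=2.0, done=0.0,
              actors=[actor1, actor2], critics=[critic1, critic2])
# candidates: critic1(2, 1) = 3.0 and critic2(2, -1) = 3.0, so min = 3.0
# y = 1.0 + 0.99 * 3.0 = 3.97
```

In a full implementation the actors and critics would be neural networks with target copies, and the per-pair TD error would additionally drive the critic regularization term described in the abstract.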
- Published
2024