Supplementing Gradient-Based Reinforcement Learning with Simple Evolutionary Ideas
- Publication Year:
- 2023
Abstract
- We present a simple, sample-efficient algorithm for introducing large but directed learning steps in reinforcement learning (RL) through the use of evolutionary operators. The methodology uses a population of RL agents training with a common experience buffer, with occasional crossovers and mutations of the agents, in order to search efficiently through the policy space. Unlike prior literature on combining evolutionary search (ES) with RL, this work does not generate a distribution of agents from a common mean and covariance matrix, nor does it require evaluating the entire population of policies at every time step. Instead, we focus on gradient-based training throughout the life of every policy (individual), supplemented by sparse evolutionary exploration. The resulting algorithm is shown to be robust to hyperparameter variations. As a surprising corollary, we show that simply initialising and training multiple RL agents with a common memory (with no further evolutionary updates) outperforms several standard RL baselines.
- Comment: 17 pages
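- The abstract describes the core loop in enough detail to sketch its structure. Below is a minimal, hypothetical Python illustration: a population of agents sharing one experience buffer, gradient training as the default, and only occasional crossover/mutation events. All names (`Agent`, `train_step`, `crossover`, `mutate`) and constants (population size, buffer size, evolution period) are assumptions made for illustration, not the paper's implementation; the gradient update itself is left as a placeholder.

```python
import random
from collections import deque

# Hypothetical structural sketch of the approach described in the abstract.
# Agent, train_step, crossover, and mutate are illustrative names only.

class Agent:
    def __init__(self, n_params=8):
        # Policy parameters, modelled here as a flat vector of floats.
        self.params = [random.gauss(0.0, 0.1) for _ in range(n_params)]

    def train_step(self, batch):
        # Placeholder for one gradient-based update (e.g., a DQN or SAC
        # step) on a minibatch sampled from the shared buffer.
        pass


def crossover(parent_a, parent_b):
    # Child inherits each parameter from one parent, chosen at random.
    child = Agent()
    child.params = [random.choice(pair)
                    for pair in zip(parent_a.params, parent_b.params)]
    return child


def mutate(agent, sigma=0.05):
    # Gaussian perturbation of the parameters: a large but directed step.
    agent.params = [p + random.gauss(0.0, sigma) for p in agent.params]


# One experience buffer shared by the entire population.
shared_buffer = deque(maxlen=100_000)
population = [Agent() for _ in range(4)]

for step in range(10_000):
    for agent in population:
        # Each agent acts, writes its transition to the common buffer, and
        # takes a gradient step on a minibatch sampled from that buffer.
        shared_buffer.append(("state", "action", "reward", "next_state"))
        if len(shared_buffer) >= 32:
            agent.train_step(random.sample(shared_buffer, 32))

    # Sparse evolutionary update: only occasionally, one agent is replaced
    # by a mutated crossover of two others, so the full population is never
    # evaluated at every time step.
    if step > 0 and step % 1_000 == 0:
        parent_a, parent_b = random.sample(population, 2)
        child = crossover(parent_a, parent_b)
        mutate(child)
        population[random.randrange(len(population))] = child
```

- The structural point this sketch tries to capture, per the abstract, is that evolution is a sparse supplement to per-agent gradient training, rather than a per-step population evaluation as in typical ES methods.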
Details
- Database:
- arXiv
- Publication Type:
- Report
- Accession Number:
- edsarx.2305.07571
- Document Type:
- Working Paper