Map-based Experience Replay: A Memory-Efficient Solution to Catastrophic Forgetting in Reinforcement Learning
- Source: Frontiers in Neurorobotics 17:1127642 (2023)
- Publication Year: 2023
Abstract
- Deep Reinforcement Learning agents often suffer from catastrophic forgetting: when training on new data, they forget previously found solutions in other parts of the input space. Replay memories are a common solution to the problem, decorrelating and shuffling old and new training samples, but they naively store state transitions as they come in, without regard for redundancy. We introduce a novel cognitively inspired replay memory approach based on the Grow-When-Required (GWR) self-organizing network, which resembles a map-based mental model of the world. Our approach organizes stored transitions into a concise, environment-model-like network of state nodes and transition edges, merging similar samples to reduce the memory size and increase the pairwise distance among samples, which raises the relevance of each individual sample. Overall, our paper shows that map-based experience replay allows for significant memory reduction with only small performance decreases.
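To make the idea concrete, here is a minimal sketch of a GWR-style replay memory that stores states as nodes and transitions as edges, merging near-duplicate states instead of appending them. This is an illustration only, not the authors' implementation: the class name `MapReplayMemory`, the exponential activity measure `exp(-distance)`, and the `activity_threshold` / `merge_lr` parameters are all assumptions made for this sketch.

```python
# Hypothetical GWR-inspired replay memory (illustrative sketch, not the paper's code).
import numpy as np

class MapReplayMemory:
    """States become nodes in a growing network; transitions become edges.
    Samples close to an existing node are merged into it rather than stored."""

    def __init__(self, activity_threshold=0.9, merge_lr=0.1):
        self.nodes = []   # node positions in state space
        self.edges = {}   # (src_node, dst_node) -> (action, reward, done)
        self.activity_threshold = activity_threshold  # assumed merge criterion
        self.merge_lr = merge_lr                      # step size when merging

    def _best_match(self, state):
        # Find the node closest to the incoming state.
        dists = [np.linalg.norm(state - n) for n in self.nodes]
        i = int(np.argmin(dists))
        return i, dists[i]

    def _insert_or_merge(self, state):
        if not self.nodes:
            self.nodes.append(state.copy())
            return 0
        i, dist = self._best_match(state)
        # GWR-style activity of the best-matching node; exp(-dist) is an assumption.
        if np.exp(-dist) >= self.activity_threshold:
            # Close enough: nudge the node toward the sample and reuse it,
            # instead of storing a redundant entry.
            self.nodes[i] += self.merge_lr * (state - self.nodes[i])
            return i
        self.nodes.append(state.copy())  # grow when required
        return len(self.nodes) - 1

    def add(self, state, action, reward, next_state, done):
        i = self._insert_or_merge(np.asarray(state, dtype=float))
        j = self._insert_or_merge(np.asarray(next_state, dtype=float))
        # Latest transition between the same node pair overwrites the old one.
        self.edges[(i, j)] = (action, reward, done)

    def sample(self, batch_size, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        keys = list(self.edges)
        idx = rng.choice(len(keys), size=min(batch_size, len(keys)), replace=False)
        batch = []
        for k in idx:
            i, j = keys[k]
            action, reward, done = self.edges[(i, j)]
            batch.append((self.nodes[i], action, reward, self.nodes[j], done))
        return batch

# Example usage: nearby states collapse into shared nodes.
mem = MapReplayMemory()
mem.add(state=[0.0, 0.0], action=1, reward=0.5, next_state=[0.1, 0.0], done=False)
mem.add(state=[0.01, 0.0], action=1, reward=0.5, next_state=[0.11, 0.0], done=False)
```

Because merged nodes drift toward the mean of the samples they absorb, memory size grows with the diversity of visited states rather than with the raw number of transitions, which is the memory-reduction effect the abstract describes.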
Details
- Database: arXiv
- Journal: Frontiers in Neurorobotics 17:1127642 (2023)
- Publication Type: Report
- Accession Number: edsarx.2305.02054
- Document Type: Working Paper
- Full Text: https://doi.org/10.3389/fnbot.2023.1127642