1. Random Replaying Consolidated Knowledge in the Continual Learning Model
- Authors
Wang, Guanglu; Liu, Xinyue; Liang, Wenxin; Zong, Linlin; and Zhang, XianChao
- Subjects
Artificial Intelligence, Machine learning, Memory, Experience sampling, Neural Networks
- Abstract
A continual learning (CL) model is designed to solve the catastrophic forgetting problem, which degrades the performance of neural networks by overwriting previous knowledge with new knowledge. The fundamental cause of this problem is that previous data is unavailable when training on new data in the CL setting. Memory-based CL methods address this problem with a memory buffer that stores a limited subset of previous data for replay, and most methods of this type adopt random storage and replay strategies. In the human brain, the hippocampus replays consolidated knowledge from the neocortex in a random manner, e.g., in random dreaming. Inspired by this memory mechanism, we propose a memory-based method that replays more consolidated memory data while maintaining randomness. Our work highlights that random replaying is important for the CL model, which corroborates the effectiveness of random dreaming in the human brain.
- Published
2024
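The random storage-and-replay strategy the abstract refers to is commonly implemented with reservoir sampling over the incoming data stream, which keeps a fixed-size buffer in which every example seen so far has an equal chance of being retained. Below is a minimal, self-contained sketch of such a buffer; the class name `RandomReplayBuffer` and its methods are illustrative assumptions, not the paper's actual implementation.

```python
import random


class RandomReplayBuffer:
    """Fixed-size memory buffer using reservoir sampling, so every
    example observed so far is retained with equal probability.
    (Illustrative sketch; not the method proposed in the paper.)"""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []      # stored (example, label) pairs
        self.num_seen = 0   # total examples observed so far

    def store(self, example, label):
        """Reservoir sampling: after n examples, each one has
        been kept with probability capacity / n."""
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((example, label))
        else:
            # Replace a random slot with probability capacity / num_seen.
            idx = random.randrange(self.num_seen)
            if idx < self.capacity:
                self.data[idx] = (example, label)

    def sample(self, batch_size):
        """Draw a uniformly random replay batch from the buffer."""
        k = min(batch_size, len(self.data))
        return random.sample(self.data, k)
```

In a typical memory-based CL training loop, each batch from the current task would be interleaved with a batch drawn via `sample`, so the network rehearses randomly selected previous data alongside the new data.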