Unsupervised Replay Strategies for Continual Learning with Limited Data
- Authors
Bazhenov, Anthony; Dewasurendra, Pahan; Krishnan, Giri P.; Delanois, Jean Erik
- Subjects
Computer Science - Machine Learning
- Abstract
Artificial neural networks (ANNs) show limited performance with scarce or imbalanced training data and struggle with continual learning, such as forgetting previously learned data after training on new tasks. In contrast, the human brain can learn continuously and from just a few examples. This research explores the impact of 'sleep', an unsupervised phase incorporating stochastic activation with local Hebbian learning rules, on ANNs trained incrementally with limited and imbalanced datasets, specifically MNIST and Fashion MNIST. We found that introducing a sleep phase significantly enhanced accuracy in models trained with limited data. When a few tasks were trained sequentially, sleep replay not only rescued previously learned information that had been catastrophically forgotten after training on new tasks, but often enhanced performance on prior tasks, especially those trained with limited data. This study highlights the multifaceted role of sleep replay in augmenting learning efficiency and facilitating continual learning in ANNs.
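
To make the described mechanism concrete, below is a minimal NumPy sketch of an unsupervised 'sleep' phase of the kind the abstract describes: the network is driven by stochastic activity and weights are adjusted with a purely local Hebbian rule, with no labels involved. All function names, constants, and the exact update rule here are illustrative assumptions for a simple two-layer network, not the authors' published implementation.

```python
import numpy as np

def sleep_phase(W1, W2, n_steps=1000, rate=0.2, thresh=1.0,
                lr_up=0.01, lr_down=0.005, rng=None):
    """Hypothetical unsupervised 'sleep' pass: drive the network with
    stochastic input and apply a local Hebbian update (no labels used).

    W1: (hidden, input) weight matrix; W2: (output, hidden) weight matrix.
    Constants and the update rule are illustrative, not the paper's method.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_in = W1.shape[1]
    for _ in range(n_steps):
        # Stochastic binary input activity (Poisson-like noise drive).
        x = (rng.random(n_in) < rate).astype(float)
        # Threshold-based propagation yields binary hidden/output activity.
        h = (W1 @ x > thresh).astype(float)
        y = (W2 @ h > thresh).astype(float)
        # Local Hebbian rule: strengthen weights between co-active units,
        # weaken weights into active units from silent presynaptic units.
        W1 += lr_up * np.outer(h, x) - lr_down * np.outer(h, 1.0 - x)
        W2 += lr_up * np.outer(y, h) - lr_down * np.outer(y, 1.0 - h)
    return W1, W2
```

In a continual-learning setting, a pass like this would be interleaved between supervised training on successive tasks, which is the role the sleep phase plays in the study described above.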
- Published
2024