New Insights on Reducing Abrupt Representation Change in Online Continual Learning

Authors :
Caccia, Lucas
Aljundi, Rahaf
Asadi, Nader
Tuytelaars, Tinne
Pineau, Joelle
Belilovsky, Eugene
Publication Year :
2022

Abstract

In the online continual learning paradigm, agents must learn from a changing distribution while respecting memory and compute constraints. Experience Replay (ER), where a small subset of past data is stored and replayed alongside new data, has emerged as a simple and effective learning strategy. In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones. We shed new light on this question by showing that applying ER causes the newly added classes' representations to overlap significantly with the previous classes, leading to highly disruptive parameter updates. Based on this empirical analysis, we propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes. We show that using an asymmetric update rule, which pushes new classes to adapt to the older ones (rather than the reverse), is especially effective at task boundaries, where much of the forgetting typically occurs. Empirical results show significant gains over strong baselines on standard continual learning benchmarks.

Comment: This has been withdrawn as it is a new version of arXiv:2104.05025
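To make the asymmetric update rule concrete, below is a minimal PyTorch sketch of one possible reading of the abstract: incoming samples from newly appearing classes are trained with a cross-entropy restricted to the classes present in the incoming batch, so their gradients cannot displace the representations of older classes, while replayed samples use the standard cross-entropy over all classes. The function name, signature, and masking details are illustrative assumptions, not the paper's verified implementation.

```python
import torch
import torch.nn.functional as F

def asymmetric_replay_loss(model, incoming_x, incoming_y, replay_x, replay_y):
    """Sketch of an asymmetric ER update (illustrative, not the paper's code).

    Incoming (new-class) samples compete only among the classes present in
    the incoming batch, shielding old-class representations from disruptive
    updates; replayed samples are trained over all classes as usual.
    """
    logits_in = model(incoming_x)   # logits for the incoming batch
    logits_re = model(replay_x)     # logits for the replayed batch

    # Mask incoming logits: classes absent from the incoming batch get a
    # large negative logit, so they receive (near-)zero gradient.
    present = incoming_y.unique()
    mask = torch.full_like(logits_in, -1e9)
    mask[:, present] = 0.0
    loss_in = F.cross_entropy(logits_in + mask, incoming_y)

    # Replayed samples use the symmetric loss over all classes seen so far.
    loss_re = F.cross_entropy(logits_re, replay_y)
    return loss_in + loss_re
```

Under this sketch, a training step would draw a small batch from the replay buffer alongside each incoming batch and backpropagate through the combined loss; the asymmetry lies entirely in the logit mask applied to the incoming samples.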

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2203.03798
Document Type :
Working Paper