Distilled Meta-learning for Multi-Class Incremental Learning.
- Source :
- ACM Transactions on Multimedia Computing, Communications & Applications; Jul 2023, Vol. 19 Issue 4, p1-16, 16p
- Publication Year :
- 2023
Abstract
- Meta-learning approaches have recently achieved promising performance in multi-class incremental learning. However, meta-learners still suffer from catastrophic forgetting, i.e., they tend to forget knowledge learned from old tasks while rapidly adapting to the new classes of the current task. To solve this problem, we propose a novel distilled meta-learning (DML) framework for multi-class incremental learning that seamlessly integrates meta-learning with knowledge distillation in each incremental stage. Specifically, during inner-loop training, knowledge distillation is incorporated into DML to overcome catastrophic forgetting. During outer-loop training, a meta-update rule is designed for the meta-learner to learn across tasks and quickly adapt to new tasks. By virtue of this bilevel optimization, our model is encouraged to reach a balance between retaining old knowledge and learning new knowledge. Experimental results on four benchmark datasets demonstrate the effectiveness of our proposal and show that our method significantly outperforms other state-of-the-art incremental learning methods. [ABSTRACT FROM AUTHOR]
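- Note: the abstract describes a bilevel scheme in which an inner loop adapts to new classes under a knowledge-distillation constraint from the old model, and an outer loop meta-updates the learner across tasks. The sketch below is only an illustrative, first-order PyTorch rendering of that idea; the function names, the task loader layout, the hyperparameters (inner_lr, outer_lr, lam, T), and the first-order outer update are assumptions of this sketch, not the paper's actual DML algorithm (see the full text at the DOI below).

```python
import copy
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target KL distillation (Hinton-style), used to retain old-task knowledge."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def incremental_stage(meta_model, old_model, task_loader, inner_steps=5,
                      inner_lr=0.01, outer_lr=0.001, lam=1.0):
    """One incremental stage (sketch): inner-loop adaptation with distillation,
    outer-loop (first-order) meta-update of the meta-learner's parameters."""
    outer_opt = torch.optim.SGD(meta_model.parameters(), lr=outer_lr)

    for support_x, support_y, query_x, query_y in task_loader:
        # Inner loop: adapt a copy of the meta-learner to the new classes,
        # regularized by distillation from the frozen old-task model.
        fast_model = copy.deepcopy(meta_model)
        inner_opt = torch.optim.SGD(fast_model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            logits = fast_model(support_x)
            ce = F.cross_entropy(logits, support_y)
            with torch.no_grad():
                old_logits = old_model(support_x)
            # Distill only over the old classes' output units.
            kd = distillation_loss(logits[:, :old_logits.size(1)], old_logits)
            loss = ce + lam * kd
            inner_opt.zero_grad()
            loss.backward()
            inner_opt.step()

        # Outer loop (first-order approximation): evaluate the adapted model on the
        # query set and apply its gradient to the meta-parameters.
        query_loss = F.cross_entropy(fast_model(query_x), query_y)
        grads = torch.autograd.grad(query_loss, fast_model.parameters())
        outer_opt.zero_grad()
        for p_meta, g in zip(meta_model.parameters(), grads):
            p_meta.grad = g.clone()
        outer_opt.step()
```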
- Subjects :
- MACHINE learning
BILEVEL programming
RIGHT to be forgotten
PROBLEM solving
Details
- Language :
- English
- ISSN :
- 1551-6857
- Volume :
- 19
- Issue :
- 4
- Database :
- Complementary Index
- Journal :
- ACM Transactions on Multimedia Computing, Communications & Applications
- Publication Type :
- Academic Journal
- Accession number :
- 164443556
- Full Text :
- https://doi.org/10.1145/3576045