
QvQ-IL: quantity versus quality in incremental learning.

Authors :
Han, Jidong
Zhang, Ting
Liu, Zhaoying
Li, Yujian
Source :
Neural Computing & Applications. Feb 2024, Vol. 36, Issue 6, p2767-2796. 30p.
Publication Year :
2024

Abstract

The catastrophic forgetting problem is one of the hotspots in the field of deep learning. At present, there is no doubt that storing samples of previous tasks in a fixed-size memory is the best way to solve this problem. However, the number of samples that can be kept in a fixed-size memory is limited: as the number of tasks grows, the number of samples stored for any single task decreases sharply, and it is difficult to balance memory capacity against the number of samples. To solve this problem, some methods use a fixed-size memory to store dimensionality-reduced images. However, this introduces new problems: 1) the quality of dimensionality-reduced images is poor, and they differ significantly from the original images; 2) it is unclear how the image dimensionality reduction method should be chosen. To address these problems, we put forward a new method. First, we employ a simple and reliable scheme to resolve the domain difference between dimensionality-reduced images and original images, and we theoretically analyze which image dimensionality reduction method is better. Second, to increase the generalization ability of our method and further mitigate the catastrophic forgetting phenomenon, we utilize a self-supervised image augmentation method and an output-feature similarity loss. Third, we make use of neural kernel mapping support vector machine theory to improve the interpretability of our method. Experimental results demonstrate that the top-1 average accuracy of our method is much higher than that of other methods when using the same size of memory. [ABSTRACT FROM AUTHOR]
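The quantity-versus-quality trade-off described in the abstract can be made concrete with a minimal sketch: under a fixed byte budget, storing dimensionality-reduced (e.g., downsampled) images lets more exemplars per task fit in memory than storing originals. The budget, image sizes, and the average-pooling downsampling below are illustrative assumptions for exposition only, not the authors' actual method or code.

    import numpy as np

    def exemplars_per_task(memory_bytes, image_shape, num_tasks):
        """How many exemplars per task fit in a fixed byte budget (uint8 pixels assumed)."""
        bytes_per_image = int(np.prod(image_shape))
        total_exemplars = memory_bytes // bytes_per_image
        return total_exemplars // num_tasks

    def downsample(batch, factor=2):
        """Naive average-pooling downsample; one possible (hypothetical) dimensionality reduction."""
        b, h, w, c = batch.shape
        return batch.reshape(b, h // factor, factor, w // factor, factor, c).mean(axis=(2, 4))

    # Illustrative 2 MB exemplar memory, CIFAR-style 32x32x3 images vs. 16x16x3 downsampled ones.
    MEMORY = 2 * 1024 * 1024
    for shape, label in [((32, 32, 3), "original 32x32"), ((16, 16, 3), "downsampled 16x16")]:
        for tasks in (5, 10, 20):
            n = exemplars_per_task(MEMORY, shape, tasks)
            print(f"{label}: {tasks} tasks -> {n} exemplars per task")

With these assumed numbers, halving each spatial dimension quarters the per-image storage cost and quadruples the exemplar count per task, which is the quantity gain the paper weighs against the quality loss and domain gap of the reduced images.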

Details

Language :
English
ISSN :
0941-0643
Volume :
36
Issue :
6
Database :
Academic Search Index
Journal :
Neural Computing & Applications
Publication Type :
Academic Journal
Accession number :
174971241
Full Text :
https://doi.org/10.1007/s00521-023-09129-0