Gradient Agreement Hinders the Memorization of Noisy Labels.
- Source :
- Applied Sciences (2076-3417); Feb 2023, Vol. 13, Issue 3, p1823, 15p
- Publication Year :
- 2023
Abstract
- The performance of deep neural networks (DNNs) critically relies on high-quality annotations, yet training DNNs with noisy labels remains challenging owing to their remarkable capacity to memorize the entire training set. In this work, we use two synchronously trained networks to show that noisy labels tend to produce more divergent gradients during parameter updates. To overcome this, we propose a novel co-training framework named gradient agreement learning (GAL). By dynamically evaluating the gradient agreement coefficient of every pair of parameters from the two identical DNNs to decide whether to update them during training, GAL effectively hinders the memorization of noisy labels. Furthermore, we use the pseudo labels produced by the two DNNs to supervise the training of a third network, gaining further improvement by correcting some noisy labels while overcoming confirmation bias. Extensive experiments on various benchmark datasets demonstrate the superiority of the proposed GAL. [ABSTRACT FROM AUTHOR]
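- The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch sketch of the gradient-agreement idea, not the paper's implementation. The agreement coefficient is approximated here by the cosine similarity between the two networks' per-parameter gradients, and the threshold `tau` and the function `gal_step` are hypothetical names introduced for illustration; the record does not give the paper's exact coefficient or update rule, and the third pseudo-label-supervised network is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gal_step(net_a, net_b, x, y, opt_a, opt_b, tau=0.0):
    """One co-training step with gradient-agreement masking.

    Assumption: agreement between the two networks is measured by the
    cosine similarity of their gradients for each parameter pair, and
    pairs whose gradients disagree (similarity <= tau) are not updated.
    The paper's exact agreement coefficient may differ.
    """
    # Compute each network's gradients on the same (possibly noisy) batch.
    for net, opt in ((net_a, opt_a), (net_b, opt_b)):
        opt.zero_grad()
        F.cross_entropy(net(x), y).backward()

    # Zero the gradients of parameter pairs whose update directions
    # diverge; divergent gradients are taken as a sign of noisy labels.
    for p_a, p_b in zip(net_a.parameters(), net_b.parameters()):
        if p_a.grad is None or p_b.grad is None:
            continue
        agree = F.cosine_similarity(p_a.grad.flatten(),
                                    p_b.grad.flatten(), dim=0)
        if agree <= tau:
            p_a.grad.zero_()
            p_b.grad.zero_()

    opt_a.step()
    opt_b.step()

# Toy usage: two networks with the same architecture but different
# random initializations, trained on one synthetic batch.
net_a, net_b = nn.Linear(8, 3), nn.Linear(8, 3)
opt_a = torch.optim.SGD(net_a.parameters(), lr=0.1)
opt_b = torch.optim.SGD(net_b.parameters(), lr=0.1)
x, y = torch.randn(16, 8), torch.randint(0, 3, (16,))
gal_step(net_a, net_b, x, y, opt_a, opt_b)
```

- Masking at the level of whole parameter tensors, as above, is one plausible reading of "every pair of parameters"; an element-wise mask over individual weights would be an equally reasonable variant.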
- Subjects :
- CONFIRMATION bias; MEMORIZATION
Details
- Language :
- English
- ISSN :
- 2076-3417
- Volume :
- 13
- Issue :
- 3
- Database :
- Complementary Index
- Journal :
- Applied Sciences (2076-3417)
- Publication Type :
- Academic Journal
- Accession number :
- 161819598
- Full Text :
- https://doi.org/10.3390/app13031823