Learning with noisy labels via logit adjustment based on gradient prior method.
- Source :
- Applied Intelligence; Oct 2023, Vol. 53 Issue 20, p24393-24406, 14p
- Publication Year :
- 2023
Abstract
- Robust loss functions are crucial for training models with strong generalization capacity in the presence of noisy labels. The commonly used Cross Entropy (CE) loss tends to overfit noisy labels, while symmetric losses that are robust to label noise are limited by their symmetry conditions. We analyze the gradient of CE and identify the main difficulty posed by label noise: the imbalance of gradient norm among samples. Inspired by long-tail learning, we propose a gradient prior (GP)-based logit adjustment method to mitigate the impact of label noise. This method makes full use of the per-sample gradient to adjust the logits, enabling deep neural networks (DNNs) to effectively ignore noisy samples and focus more on learning hard samples. Experiments on benchmark datasets demonstrate that our method significantly improves the performance of CE and outperforms existing methods, especially under symmetric noise. Experiments on the object detection dataset Pascal VOC further verify that our method is plug-and-play and robust. [ABSTRACT FROM AUTHOR]
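- Note: the abstract does not give the paper's exact formulation, but the key ingredients are standard. For softmax CE, the gradient with respect to the logits is p - y (predicted probabilities minus the one-hot label), so its norm grows for samples the model disagrees with, which is why noisy samples dominate the gradient. The sketch below is a minimal, hypothetical illustration of a gradient-prior-style logit adjustment in that spirit; the function name gp_adjusted_ce, the prior definition, and the temperature tau are assumptions for illustration, not the authors' method.

```python
import torch
import torch.nn.functional as F

def gp_adjusted_ce(logits, targets, tau=1.0):
    """Hypothetical sketch of gradient-prior logit adjustment.

    Idea (per the abstract): derive a per-sample prior from the CE
    gradient and use it to adjust logits so that high-gradient-norm
    (likely noisy) samples contribute less to training.
    """
    with torch.no_grad():
        probs = F.softmax(logits, dim=1)
        onehot = F.one_hot(targets, num_classes=logits.size(1)).float()
        # For softmax CE, dL/dz = p - y; its norm measures how strongly
        # the model disagrees with the given label.
        grad_norm = (probs - onehot).norm(dim=1)          # shape: (B,)
        # Assumed prior: gradient norm normalized over the batch.
        prior = grad_norm / (grad_norm.mean() + 1e-8)     # shape: (B,)
    # Raise the target-class logit in proportion to the prior, which
    # shrinks the loss (and gradient) of high-norm samples, analogous
    # to logit adjustment in long-tail learning.
    adjusted = logits + tau * prior.unsqueeze(1) * onehot
    return F.cross_entropy(adjusted, targets)

# Usage: drop-in replacement for F.cross_entropy on a batch.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = gp_adjusted_ce(logits, targets)
loss.backward()
```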
- Subjects :
- NOISE
Details
- Language :
- English
- ISSN :
- 0924-669X
- Volume :
- 53
- Issue :
- 20
- Database :
- Complementary Index
- Journal :
- Applied Intelligence
- Publication Type :
- Academic Journal
- Accession number :
- 173152410
- Full Text :
- https://doi.org/10.1007/s10489-023-04609-1