
Leveraging Unlabeled Data to Track Memorization

Authors:
Forouzesh, Mahsa
Sedghi, Hanie
Thiran, Patrick

Publication Year:
2022

Abstract

Deep neural networks can easily memorize noisy labels present in real-world data, which degrades their ability to generalize. It is therefore important to track and evaluate the robustness of models against noisy-label memorization. We propose a metric, called susceptibility, to gauge such memorization for neural networks. Susceptibility is simple and easy to compute during training. Moreover, it requires no access to ground-truth labels, relying only on unlabeled data. We empirically demonstrate the effectiveness of our metric in tracking memorization across various architectures and datasets, and provide theoretical insights into the design of the susceptibility metric. Finally, we show through extensive experiments on datasets with synthetic and real-world label noise that susceptibility, combined with the overall training accuracy, can distinguish models that maintain low memorization on the training set and generalize well to unseen clean data.

Details

Database:
arXiv

Publication Type:
Report

Accession Number:
edsarx.2212.04461

Document Type:
Working Paper