
UnLearning from Experience to Avoid Spurious Correlations

Authors:
Mitchell, Jeff
del Rincón, Jesús Martínez
McLaughlin, Niall
Publication Year:
2024

Abstract

While deep neural networks can achieve state-of-the-art performance in many tasks, these models are more fragile than they appear. They are prone to learning spurious correlations in their training data, leading to surprising failure cases. In this paper, we propose a new approach that addresses the issue of spurious correlations: UnLearning from Experience (ULE). Our method is based on using two classification models trained in parallel: student and teacher models. Both models receive the same batches of training data. The student model is trained with no constraints and pursues the spurious correlations in the data. The teacher model is trained to solve the same classification problem while avoiding the mistakes of the student model. As training is done in parallel, the better the student model learns the spurious correlations, the more robust the teacher model becomes. The teacher model uses the gradient of the student's output with respect to its input to unlearn mistakes made by the student. We show that our method is effective on the Waterbirds, CelebA, Spawrious and UrbanCars datasets.

Comment: 10 pages
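To make the described training scheme concrete, the sketch below shows one possible reading of a single ULE-style update step, assuming PyTorch image classifiers and a cross-entropy objective. The abstract only states that the teacher uses the gradient of the student's output with respect to the input; the specific penalty here (cosine alignment between the two models' input gradients), the function name `ule_step`, and the `unlearn_weight` parameter are illustrative assumptions, not the paper's actual loss.

```python
# Hypothetical sketch of a student/teacher update in the spirit of ULE.
# Assumptions: differentiable classifiers, cross-entropy loss, and a
# gradient-alignment penalty standing in for the unspecified "unlearning" term.
import torch
import torch.nn.functional as F

def ule_step(student, teacher, opt_s, opt_t, x, y, unlearn_weight=1.0):
    # Student update: trained with no constraints, free to latch onto
    # spurious correlations present in the batch.
    opt_s.zero_grad()
    F.cross_entropy(student(x), y).backward()
    opt_s.step()

    # Student saliency: gradient of the student's output w.r.t. the input,
    # treated as a fixed signal (detached) for the teacher's update.
    x_s = x.clone().requires_grad_(True)
    s_saliency = torch.autograd.grad(student(x_s).sum(), x_s)[0].detach()

    # Teacher update: standard classification loss plus a penalty that
    # discourages the teacher's input sensitivity from aligning with the
    # student's (assumed form of the unlearning term).
    opt_t.zero_grad()
    x_t = x.clone().requires_grad_(True)
    t_logits = teacher(x_t)
    cls_loss = F.cross_entropy(t_logits, y)
    t_saliency = torch.autograd.grad(t_logits.sum(), x_t, create_graph=True)[0]
    align = F.cosine_similarity(
        t_saliency.flatten(1), s_saliency.flatten(1), dim=1
    ).mean()
    (cls_loss + unlearn_weight * align).backward()
    opt_t.step()
```

Under this reading, the more strongly the student fits a spurious cue, the more pronounced its input gradient becomes around that cue, and the harder the teacher is pushed away from it, which is consistent with the abstract's claim that a better-fitting student yields a more robust teacher.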

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.02792
Document Type:
Working Paper