
Less Learn Shortcut: Analyzing and Mitigating Learning of Spurious Feature-Label Correlation

Authors:
Du, Yanrui
Yan, Jing
Chen, Yan
Liu, Jing
Zhao, Sendong
She, Qiaoqiao
Wu, Hua
Wang, Haifeng
Qin, Bing
Publication Year:
2022

Abstract

Recent research has revealed that deep neural networks often exploit dataset biases as a shortcut to make decisions rather than understanding the task, leading to failures in real-world applications. In this study, we focus on the spurious correlations between word features and labels that models learn from the biased distribution of the training data. In particular, we define a word that highly co-occurs with a specific label as a biased word, and an example containing a biased word as a biased example. Our analysis shows that biased examples are easier for models to learn, and that at prediction time biased words contribute significantly more to the models' predictions, so models tend to assign labels by over-relying on the spurious correlation between words and labels. To mitigate this over-reliance on the shortcut (i.e., the spurious correlation), we propose a training strategy, Less-Learn-Shortcut (LLS), which quantifies the biased degree of biased examples and down-weights them accordingly. Experimental results on Question Matching, Natural Language Inference, and Sentiment Analysis tasks show that LLS is a task-agnostic strategy that improves model performance on adversarial data while maintaining good performance on in-domain data.
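The abstract describes LLS only at a high level: estimate how strongly each word co-occurs with a label, score each training example's biased degree, and down-weight biased examples during training. The sketch below illustrates that idea under assumptions of our own; the paper's actual scoring formula is not given here, so `bias_degree` (maximum label-conditional probability over an example's words) and the linear down-weighting with `alpha` are hypothetical choices, not the authors' method.

```python
from collections import Counter, defaultdict


def word_label_stats(examples):
    """Count how often each word co-occurs with each label.

    `examples` is a list of (tokens, label) pairs.
    """
    counts = defaultdict(Counter)
    for tokens, label in examples:
        for w in set(tokens):
            counts[w][label] += 1
    return counts


def bias_degree(tokens, counts):
    """Hypothetical biased degree of an example: the highest
    label-conditional probability among its words. A word seen
    only with one label scores 1.0 (maximally biased)."""
    degrees = []
    for w in set(tokens):
        total = sum(counts[w].values())
        if total:
            degrees.append(max(counts[w].values()) / total)
    return max(degrees) if degrees else 0.0


def example_weight(tokens, counts, alpha=0.8):
    """Down-weight an example's training loss in proportion to
    its biased degree (alpha controls the strength)."""
    return 1.0 - alpha * bias_degree(tokens, counts)


# Toy sentiment data: "great" co-occurs only with label 1,
# so examples containing it are treated as biased.
examples = [
    (["great", "movie"], 1),
    (["great", "film"], 1),
    (["okay", "movie"], 0),
    (["okay", "film"], 1),
]
counts = word_label_stats(examples)
print(bias_degree(["great", "movie"], counts))  # 1.0
print(bias_degree(["okay", "movie"], counts))   # 0.5
```

In training, these per-example weights would multiply the loss, so the model gets a weaker learning signal from examples whose labels are predictable from biased words alone.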

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2205.12593
Document Type:
Working Paper