
More Bang for Your Buck: Natural Perturbation for Robust Question Answering

Authors :
Khashabi, Daniel
Khot, Tushar
Sabharwal, Ashish
Publication Year :
2020

Abstract

While recent models have achieved human-level scores on many NLP datasets, we observe that they are considerably sensitive to small changes in input. As an alternative to the standard approach of addressing this issue by constructing training sets of completely new examples, we propose doing so via minimal perturbation of examples. Specifically, our approach involves first collecting a set of seed examples and then applying human-driven natural perturbations (as opposed to rule-based machine perturbations), which often change the gold label as well. Local perturbations have the advantage of being relatively easier (and hence cheaper) to create than writing out completely new examples. To evaluate the impact of this phenomenon, we consider a recent question-answering dataset (BoolQ) and study the benefit of our approach as a function of the perturbation cost ratio, the relative cost of perturbing an existing question vs. creating a new one from scratch. We find that when natural perturbations are moderately cheaper to create, it is more effective to train models using them: such models exhibit higher robustness and better generalization, while retaining performance on the original BoolQ dataset.

Comment: EMNLP 2020
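The abstract's central quantity, the perturbation cost ratio, lends itself to a simple budget calculation. The sketch below is not code from the paper; the function `examples_under_budget` and all parameter names are hypothetical, written only to illustrate the arithmetic behind the tradeoff the authors study: for a fixed annotation budget, how many training examples each strategy yields when perturbing a seed question costs a fraction r of writing a new one.

```python
# Illustrative sketch (not from the paper): under a fixed annotation
# budget, compare how many training examples one can obtain by writing
# all-new examples vs. perturbing seed examples, as a function of the
# perturbation cost ratio r = cost(perturb) / cost(new).

def examples_under_budget(budget: float, cost_new: float,
                          cost_ratio: float, num_seeds: int) -> dict:
    """Return example counts for two hypothetical annotation strategies.

    budget      -- total annotation budget (arbitrary cost units)
    cost_new    -- cost of authoring one new example from scratch
    cost_ratio  -- r = cost of one perturbation / cost of one new example
    num_seeds   -- seed examples written from scratch before perturbing
    """
    cost_perturb = cost_ratio * cost_new

    # Strategy A: spend the whole budget on brand-new examples.
    all_new = int(budget // cost_new)

    # Strategy B: write `num_seeds` new examples, then spend the
    # remainder on cheaper perturbations of those seeds.
    remaining = budget - num_seeds * cost_new
    perturbed = num_seeds + int(max(remaining, 0.0) // cost_perturb)

    return {"all_new": all_new, "seed_plus_perturb": perturbed}

# Example: with a budget of 100 units, unit cost per new example, and
# perturbations at half that cost (r = 0.5), 20 seeds plus perturbations
# yield 20 + 160 = 180 examples vs. 100 all-new ones.
print(examples_under_budget(budget=100, cost_new=1.0,
                            cost_ratio=0.5, num_seeds=20))
```

The paper's empirical question is then whether the extra examples bought this way (which are locally perturbed rather than independent) actually translate into more robust, better-generalizing models; the abstract reports that they do when r is moderately below 1.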

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1228401036
Document Type :
Electronic Resource