
How Hard is this Test Set? NLI Characterization by Exploiting Training Dynamics

Authors :
Cosma, Adrian
Ruseti, Stefan
Dascalu, Mihai
Caragea, Cornelia
Publication Year :
2024

Abstract

Natural Language Inference (NLI) evaluation is crucial for assessing language understanding models; however, popular datasets suffer from systematic spurious correlations that artificially inflate model performance. To address this, we propose a method for the automated creation of a challenging test set without relying on the manual construction of artificial and unrealistic examples. We categorize the test set of popular NLI datasets into three difficulty levels by leveraging methods that exploit training dynamics. This categorization significantly reduces spurious correlation measures, with examples labeled as having the highest difficulty showing markedly decreased performance and encompassing more realistic and diverse linguistic phenomena. When our characterization method is applied to the training set, models trained on only a fraction of the data achieve performance comparable to those trained on the full dataset, surpassing other dataset characterization techniques. Our research addresses limitations in NLI dataset construction, providing a more authentic evaluation of model performance with implications for diverse NLU applications.

Comment: Accepted at EMNLP 2024 Main Conference
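The abstract describes bucketing examples into difficulty levels using training dynamics. As a hedged illustration only, the sketch below shows one common way such dynamics are used (per-example confidence in the gold label tracked across epochs, in the spirit of dataset cartography); the paper's actual categorization method is not specified in this abstract, and the function name and thresholds here are hypothetical.

```python
# Hypothetical sketch: bucket examples into three difficulty levels from
# training dynamics. Input is, per example, the model's probability of the
# gold label recorded at each training epoch. This is NOT the paper's exact
# method, just an illustration of the general technique.
import statistics


def bucket_by_dynamics(prob_history):
    """prob_history: {example_id: [p(gold label) at each epoch]}.

    Returns {example_id: "easy" | "medium" | "hard"} by ranking examples
    on mean confidence across epochs and splitting into equal thirds.
    """
    mean_conf = {
        ex: statistics.mean(ps) for ex, ps in prob_history.items()
    }
    # High mean confidence -> easy; low -> hard; middle third -> medium.
    ranked = sorted(mean_conf, key=mean_conf.get, reverse=True)
    n = len(ranked)
    buckets = {}
    for i, ex in enumerate(ranked):
        if i < n // 3:
            buckets[ex] = "easy"
        elif i < 2 * n // 3:
            buckets[ex] = "medium"
        else:
            buckets[ex] = "hard"
    return buckets


# Toy usage with made-up per-epoch gold-label probabilities.
history = {
    "a": [0.90, 0.95, 0.97],  # consistently confident -> easy
    "b": [0.30, 0.60, 0.50],  # fluctuating -> medium
    "c": [0.10, 0.15, 0.20],  # consistently wrong -> hard
}
print(bucket_by_dynamics(history))
```

Variability (e.g. the standard deviation of the per-epoch probabilities) is often used alongside mean confidence to separate ambiguous examples from merely hard ones; the equal-thirds split above is only one simple choice.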

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.03429
Document Type :
Working Paper