
We Need to Talk About Random Splits

Authors:
Søgaard, Anders
Ebert, Sebastian Elgaard
Bastings, Jasmijn
Filippova, Katja
Source:
Søgaard, A., Ebert, S. E., Bastings, J. & Filippova, K. 2021, 'We Need to Talk About Random Splits', in Proceedings of the 2021 Conference of the European Chapter of the Association for Computational Linguistics (EACL), Association for Computational Linguistics, pp. 1823–1832.
Publication Year:
2021

Abstract

Gorman and Bedrick (2019) argued for using random splits rather than standard splits in NLP experiments. We argue that random splits, like standard splits, lead to overly optimistic performance estimates. We can also split data in biased or adversarial ways, e.g., training on short sentences and evaluating on long ones. Biased sampling has been used in domain adaptation to simulate real-world drift under the covariate shift assumption, i.e., that the input distribution changes while the conditional label distribution stays fixed. In NLP, however, even worst-case splits, maximizing bias, often under-estimate the error observed on new samples of in-domain data, i.e., the data that models should minimally generalize to at test time. This invalidates the covariate shift assumption. Instead of using multiple random splits, future benchmarks should ideally include multiple, independent test sets; if this is infeasible, we argue that multiple biased splits lead to more realistic performance estimates than multiple random splits.
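
The splitting strategies contrasted in the abstract can be made concrete in a few lines of Python. The sketch below is not from the paper; the toy corpus, test fraction, and function names are illustrative assumptions. It compares a standard random split with a length-biased split that trains on short sentences and evaluates on the longest ones, i.e., the kind of biased split the authors discuss.

    # Minimal sketch: random split vs. length-biased split (illustrative only).
    import random

    def random_split(corpus, test_fraction=0.2, seed=0):
        """Standard random split: shuffle, then hold out a fraction for testing."""
        data = list(corpus)
        random.Random(seed).shuffle(data)
        cut = int(len(data) * (1 - test_fraction))
        return data[:cut], data[cut:]

    def length_biased_split(corpus, test_fraction=0.2):
        """Biased split: sort by sentence length so the held-out test set
        contains the longest sentences, simulating a covariate shift."""
        data = sorted(corpus, key=lambda ex: len(ex[0].split()))
        cut = int(len(data) * (1 - test_fraction))
        return data[:cut], data[cut:]

    if __name__ == "__main__":
        # Toy corpus of (sentence, label) pairs, purely for illustration.
        corpus = [
            ("short sentence", 0),
            ("a slightly longer sentence here", 1),
            ("this is a much longer sentence used to illustrate the idea", 1),
            ("tiny", 0),
            ("medium length example sentence", 0),
        ]
        train, test = length_biased_split(corpus)
        print("train:", [s for s, _ in train])
        print("test :", [s for s, _ in test])

Under a length-biased split like this, a model is evaluated on sentences systematically longer than any it saw during training, which tends to give a more pessimistic (and, per the paper's argument, often still optimistic) estimate than a random split.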

Details

Database:
OAIster
Journal:
Proceedings of the 2021 Conference of the European Chapter of the Association for Computational Linguistics (EACL)
Notes:
application/pdf, English
Publication Type:
Electronic Resource
Accession number:
edsoai.on1322756875
Document Type:
Electronic Resource