1. How Good is Your Wikipedia?
- Authors
Kushal Tatariya, Artur Kulmizev, Wessel Poelman, Esther Ploeger, Marcel Bollmann, Johannes Bjerva, Jiaming Luo, Heather Lent, and Miryam de Lhoneux
- Subjects
Computer Science - Computation and Language
- Abstract
Wikipedia's perceived high quality and broad language coverage have established it as a fundamental resource in multilingual NLP. In the context of low-resource languages, however, these quality assumptions are increasingly being scrutinised. This paper critically examines the data quality of Wikipedia in a non-English setting by subjecting it to various quality filtering techniques, revealing widespread issues such as a high percentage of one-line articles and duplicate articles. We evaluate the downstream impact of quality filtering on Wikipedia and find that data quality pruning is an effective means for resource-efficient training without hurting performance, especially for low-resource languages. Moreover, we advocate for a shift in perspective from seeking a general definition of data quality towards a more language- and task-specific one. Ultimately, we aim for this study to serve as a guide to using Wikipedia for pretraining in a multilingual setting.
- Published
2024