
Mining Knowledge for Natural Language Inference from Wikipedia Categories

Authors :
Chen, Mingda
Chu, Zewei
Stratos, Karl
Gimpel, Kevin
Publication Year :
2020

Abstract

Accurate lexical entailment (LE) and natural language inference (NLI) often require large quantities of costly annotations. To alleviate the need for labeled data, we introduce WikiNLI: a resource for improving model performance on NLI and LE tasks. It contains 428,899 pairs of phrases constructed from naturally annotated category hierarchies in Wikipedia. We show that we can improve strong baselines such as BERT and RoBERTa by pretraining them on WikiNLI and transferring the models to downstream tasks. We conduct systematic comparisons with phrases extracted from other knowledge bases such as WordNet and Wikidata, finding that pretraining on WikiNLI gives the best performance. In addition, we construct WikiNLI in other languages and show that pretraining on them improves performance on NLI tasks in the corresponding languages.

Comment: Findings of EMNLP 2020
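The abstract describes mining phrase pairs from Wikipedia's category hierarchies. A minimal sketch of the general idea, assuming a toy hierarchy and a simple child-entails-parent labeling scheme (the hierarchy, label names, and pairing rule here are illustrative assumptions, not the paper's exact procedure):

```python
# Toy stand-in for a Wikipedia category hierarchy:
# each key is a parent category phrase, each value its subcategories.
# (Illustrative data only, not drawn from the actual WikiNLI resource.)
toy_hierarchy = {
    "musical instruments": ["string instruments", "percussion instruments"],
    "string instruments": ["violins", "guitars"],
}

def mine_pairs(hierarchy):
    """Yield (premise, hypothesis, label) triples, taking a child
    category phrase to entail its parent category phrase."""
    pairs = []
    for parent, children in hierarchy.items():
        for child in children:
            pairs.append((child, parent, "entailment"))
    return pairs

pairs = mine_pairs(toy_hierarchy)
```

Pairs like these could then serve as an intermediate pretraining signal for an NLI classifier before fine-tuning on a downstream task.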

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2010.01239
Document Type :
Working Paper