KgPLM: Knowledge-guided Language Model Pre-training via Generative and Discriminative Learning
- Publication Year:
- 2020
Abstract
- Recent studies of pre-trained language models have demonstrated their ability to capture factual knowledge and their utility in knowledge-aware downstream tasks. In this work, we present a language model pre-training framework guided by factual knowledge completion and verification, in which generative and discriminative approaches are used cooperatively to learn the model. In particular, we investigate two learning schemes, a two-tower scheme and a pipeline scheme, for training the generator and discriminator with shared parameters. Experimental results on LAMA, a set of zero-shot cloze-style question answering tasks, show that our model captures richer factual knowledge than conventional pre-trained language models. Furthermore, when fine-tuned and evaluated on the MRQA shared tasks, which consist of several machine reading comprehension datasets, our model achieves state-of-the-art performance and gains large improvements over RoBERTa on NewsQA (+1.26 F1) and TriviaQA (+1.56 F1).
- Comment: 10 pages, 3 figures
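
To make the shared-parameter generator/discriminator idea concrete, here is a minimal PyTorch sketch of a two-tower scheme: one shared encoder with a generative head (masked-token completion) and a discriminative head (original-vs-replaced verification). All class names, dimensions, and the corruption/masking strategy below are illustrative assumptions for exposition, not the paper's actual architecture or training recipe.

```python
# Minimal sketch (assumptions throughout): a shared Transformer encoder with
# a generative head for knowledge completion and a discriminative head for
# knowledge verification, trained jointly. Not the paper's implementation.
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    """Transformer encoder whose parameters are shared by both towers."""

    def __init__(self, vocab_size=30522, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):
        return self.encoder(self.embed(token_ids))


class TwoTowerSketch(nn.Module):
    """Two-tower scheme: one shared encoder, two task-specific heads.

    - Generative head: predicts masked (knowledge) tokens -> completion.
    - Discriminative head: classifies each token as original vs. replaced
      -> verification.
    """

    def __init__(self, vocab_size=30522, d_model=256):
        super().__init__()
        self.encoder = SharedEncoder(vocab_size, d_model)
        self.gen_head = nn.Linear(d_model, vocab_size)  # token prediction
        self.disc_head = nn.Linear(d_model, 1)          # real/replaced logit

    def forward(self, masked_ids, corrupted_ids):
        gen_logits = self.gen_head(self.encoder(masked_ids))
        disc_logits = self.disc_head(self.encoder(corrupted_ids)).squeeze(-1)
        return gen_logits, disc_logits


# Toy usage: joint loss over both objectives on random data.
model = TwoTowerSketch()
masked_ids = torch.randint(0, 30522, (2, 16))     # input with masked slots
corrupted_ids = torch.randint(0, 30522, (2, 16))  # input with replaced tokens
target_ids = torch.randint(0, 30522, (2, 16))     # gold tokens at masks
replaced = torch.randint(0, 2, (2, 16)).float()   # 1 = token was replaced

gen_logits, disc_logits = model(masked_ids, corrupted_ids)
loss = (nn.functional.cross_entropy(gen_logits.transpose(1, 2), target_ids)
        + nn.functional.binary_cross_entropy_with_logits(disc_logits, replaced))
loss.backward()
```

Under the same assumptions, a pipeline scheme would instead feed the generator's sampled completions into the discriminator as its corrupted input (as in ELECTRA-style replaced-token detection), rather than corrupting the input independently as above.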
Details
- Database:
- arXiv
- Publication Type:
- Report
- Accession Number:
- edsarx.2012.03551
- Document Type:
- Working Paper