SChuBERT: Scholarly Document Chunks with BERT-encoding boost Citation Count Prediction
- Source:
- Proceedings of the First Workshop on Scholarly Document Processing. Association for Computational Linguistics. (2020) 158-167. EMNLP|SDP 2020. https://www.aclweb.org/anthology/2020.sdp-1.17/
- Publication Year:
- 2020
Abstract
- Predicting the number of citations of scholarly documents is an emerging task in scholarly document processing. Beyond its intrinsic merit, citation count also serves as an imperfect proxy for quality, with the advantage of being cheaply available for large volumes of scholarly documents. Previous work has addressed citation count prediction with relatively small training sets, or with larger datasets but short, incomplete input text. In this work we leverage the open access ACL Anthology collection in combination with the Semantic Scholar bibliometric database to create a large corpus of scholarly documents with associated citation information, and we propose a new citation prediction model called SChuBERT. In our experiments we compare SChuBERT with several state-of-the-art citation prediction models and show that it outperforms previous methods by a large margin. We also show the merit of using more training data and longer input for citation count prediction.
- Comment: Published at the First Workshop on Scholarly Document Processing, at EMNLP 2020. Minor corrections were made to the workshop version, including the addition of color to Figures 1 and 2.
Details
- Database:
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2012.11740
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.18653/v1/2020.sdp-1.17