
Assessing the quality of information on Wikipedia: A deep-learning approach.

Authors :
Wang, Ping
Li, Xiaodan
Source :
Journal of the Association for Information Science & Technology; Jan 2020, Vol. 71 Issue 1, p16-28, 13p, 12 Charts, 6 Graphs
Publication Year :
2020

Abstract

Web document repositories such as Wikipedia are collaboratively created and edited, and Wikipedia faces an important problem: assessing the quality of its articles. Existing approaches exploit techniques such as statistical models or machine learning algorithms to assess Wikipedia article quality. However, existing models do not provide satisfactory results, and they fail to adopt a comprehensive feature framework. In this article, we conduct an extensive survey of previous studies and summarize a comprehensive feature framework, including text statistics, writing style, readability, article structure, network, and editing history. Selected state-of-the-art deep-learning models, including the convolutional neural network (CNN), deep neural network (DNN), long short-term memory (LSTM) network, CNN-LSTM, bidirectional LSTM, and stacked LSTM, are applied to assess the quality of Wikipedia articles. A detailed comparison of the deep-learning models is conducted with regard to classification performance and training performance. We include an importance analysis of individual features and feature sets to determine which are most effective in distinguishing Wikipedia article quality. This extensive experiment validates the effectiveness of the proposed model. [ABSTRACT FROM AUTHOR]
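The LSTM variants named in the abstract (LSTM, CNN-LSTM, bidirectional LSTM, stacked LSTM) all build on the same gated recurrence. The record does not include the authors' implementation; as a generic illustration only, the sketch below steps a scalar-state LSTM cell through a short feature sequence. All weights and the toy input sequence are invented for the example.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step with scalar input and state (illustration only).

    W maps each gate name to (w_x, w_h, b); a gate computes an affine
    combination of the input x and previous hidden state h_prev.
    """
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])   # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])   # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])   # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2]) # candidate
    c = f * c_prev + i * g        # new cell state mixes old memory and candidate
    h = o * math.tanh(c)          # new hidden state, gated by the output gate
    return h, c

# Toy weights and a short made-up feature sequence.
W = {k: (0.5, 0.5, 0.0) for k in ("i", "f", "o", "g")}
h, c = 0.0, 0.0
for x in (1.0, 0.5, -0.5):
    h, c = lstm_step(x, h, c, W)
```

In the paper's setting, each article would instead be represented by a sequence of feature vectors (text statistics, readability, edit history, etc.), and a framework such as Keras would supply vectorized LSTM layers; this sketch only makes the per-step gating explicit.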

Details

Language :
English
ISSN :
2330-1635
Volume :
71
Issue :
1
Database :
Complementary Index
Journal :
Journal of the Association for Information Science & Technology
Publication Type :
Academic Journal
Accession number :
140089734
Full Text :
https://doi.org/10.1002/asi.24210