1. Content-based quality evaluation of scientific papers using coarse feature and knowledge entity network.
- Author
Wang, Zhongyi; Zhang, Haoxuan; Chen, Haihua; Feng, Yunhe; Ding, Junhua
- Subjects
MACHINE learning, SCIENCE education, COMPUTER science, PEER pressure, RANDOM forest algorithms
- Abstract
Pre-evaluating scientific paper quality helps alleviate peer review pressure and foster scientific advancement. Although prior studies have identified numerous quality-related features, their effectiveness and representativeness of paper content remain to be comprehensively investigated. To address this issue, we propose a content-based, interpretable method for pre-evaluating the quality of scientific papers. First, we define the quality attributes of computer science (CS) papers as integrity, clarity, novelty, and significance, based on peer review criteria from 11 top-tier CS conferences. We formulate the problem as two classification tasks: Accepted/Disputed/Rejected (ADR) and Accepted/Rejected (AR). We then construct fine-grained features from metadata and knowledge entity networks, covering text structure, readability, references, citations, semantic novelty, and network structure. We empirically evaluate our method on the ICLR paper dataset, achieving the best performance with the Random Forest model: F1 scores of 0.715 and 0.762 on the two tasks, respectively. Through feature analysis and case studies using SHAP interpretability methods, we demonstrate that the proposed features improve the performance of machine learning models in scientific paper quality evaluation and offer interpretable evidence for model decisions.
• Define four criteria for quality evaluation of scientific papers: integrity, clarity, novelty, and significance.
• Propose a framework for quality evaluation of scientific papers based on coarse features and a knowledge entity network.
• Present an effective algorithm for measuring the novelty and significance of scientific papers based on knowledge entity networks.
• Create and release a rigorous dataset that could serve as a gold standard for quality evaluation of scientific papers.
• Conduct extensive experiments to validate the effectiveness of the proposed framework.
[ABSTRACT FROM AUTHOR]
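The pipeline the abstract describes (paper-level features → Random Forest classifier → F1 evaluation → feature-importance analysis) can be sketched as follows. This is a minimal illustration on synthetic data with hypothetical feature names; the actual study uses features derived from ICLR paper metadata and knowledge entity networks, and uses SHAP values (here approximated by the model's impurity-based importances) for interpretation.

```python
# Sketch of a paper-quality classification pipeline: synthetic stand-ins
# for the study's features, a Random Forest classifier for the binary
# Accepted/Rejected (AR) task, and feature importances as a rough proxy
# for SHAP-based analysis. All names and data here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
feature_names = ["readability", "n_references", "semantic_novelty",
                 "network_degree", "text_structure", "citation_count"]

# Synthetic feature matrix: 600 "papers" x 6 features.
X = rng.normal(size=(600, len(feature_names)))
# Synthetic AR labels, loosely driven by novelty and network structure.
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=600) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

f1 = f1_score(y_te, clf.predict(X_te), average="macro")
print(f"macro F1: {f1:.3f}")

# Rank features by impurity-based importance (SHAP proxy in this sketch).
for name, imp in sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:18s} {imp:.3f}")
```

In the paper itself, SHAP (e.g., a tree explainer over the fitted forest) would replace the impurity-based importances shown here, since SHAP attributes predictions per paper and supports the case-study analysis the abstract mentions.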
- Published
- 2024