1. Identification and Categorization of Evaluation Criteria for Scientific Outputs in the Scholarly Publishing Ecosystem (شناسایی و دسته‌بندی معیارهای ارزیابی برون‌دادهای علمی در زیست‌بوم نشر علمی).
- Author
- افروز همراهی (Afrouz Hamrahi), رویا پورنقی (Roya Pournaghi), and داریوش مطلبی (Dariush Matlabi)
- Abstract
Purpose: Science and technology are among the most critical infrastructures for a country's progress and essential tools for competition across fields. Evaluation lies at the core of all scientific effort and has grown increasingly important with the proliferation of scientific publications; it is not a simple, transparent process but a delicate activity. Because multiple indicators for assessing the value of scientific outputs appear across texts, databases, and scientific centers and publications, this study examines three sources (texts, scholarly publication networks, and experts) to establish integrated criteria for evaluating outputs in the scholarly publishing ecosystem. In the absence of an integrated framework, some outputs, such as lectures, workshops, and scientific meetings, are often overlooked; evaluations typically consider only a few quantitative aspects (impact factor, citation counts, usage counts) and are confined to short timeframes. Identifying and categorizing these indicators within a single framework can help address these issues and establish a continuous evaluation process both before and after the publication of scientific works. The research therefore aims to identify comprehensive evaluation criteria in the scholarly publishing ecosystem, drawing on texts, scholarly publication networks, and the perspectives of scientific publication experts.
Methodology: A triangulation of three studies produced the conceptual framework of evaluation indicators in the scholarly publishing ecosystem. First, a systematic review extracted evaluation criteria from 331 sources. To validate the extracted criteria and finalize the initial framework, the identified criteria were then examined across 12 scientific databases, and the framework was ultimately endorsed by 30 domestic and international scholarly publication experts. Purposive sampling was employed in all three studies.
Findings: The scholarly publishing ecosystem consists of various components, including experts, scientific centers, information media, subject areas, and information and knowledge systems, each requiring different indicators and methods. The criteria extracted in the systematic review were classified into three groups: form, type, and format. Evaluation forms comprise content, open, altmetric, and bibliographic evaluation (evaluation of creators and of sources). Some experts distinguish bibliometric indicators from scientometric and informetric indicators, but most experts across subject areas treat all three as bibliographic evaluation; here, creators include individuals as well as scientific organizations such as universities. Open evaluation means that an output is judged not only by a jury of experts but by anyone interested in it; in other words, open evaluation is an ongoing, transparent post-publication peer-evaluation process. Multiple paper-evaluation functions, freely defined by individuals or groups, provide varied perspectives on the scientific literature; such functions, alongside research-evaluation criteria that reach beyond traditional methods, are emerging and bring with them a range of practical, ethical, and social factors to consider (a minimal sketch of this idea follows).
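To make the idea of freely defined paper-evaluation functions concrete, here is a minimal sketch in Python. The metric names, weights, and sample records are hypothetical illustrations, not data or formulas from the study; the point is only that differently motivated scoring functions can rank the same outputs differently, which is why the abstract flags practical, ethical, and social factors.

```python
# Sketch: "multiple paper-evaluation functions" - different stakeholders
# define their own scoring functions over the same outputs, yielding
# different rankings. All fields and weights below are hypothetical.

papers = [
    {"title": "A", "citations": 120, "views": 3000, "bookmarks": 15, "peer_score": 3.2},
    {"title": "B", "citations": 40,  "views": 9000, "bookmarks": 90, "peer_score": 4.5},
]

def citation_centric(p):    # a traditional, bibliometric-style function
    return p["citations"]

def engagement_centric(p):  # an altmetric-style function
    return 0.001 * p["views"] + 1.0 * p["bookmarks"]

def mixed(p):               # a mixed quantitative/qualitative function
    return 0.01 * p["citations"] + p["peer_score"]

for fn in (citation_centric, engagement_centric, mixed):
    ranking = sorted(papers, key=fn, reverse=True)
    print(fn.__name__, "->", [p["title"] for p in ranking])
    # citation_centric ranks A first; the other two rank B first
```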
Altmetric evaluation is a set of social-web methods that measure and monitor the reach and impact of scholarly output through online interactions; put simply, altmetrics are metrics beyond traditional citations, counting cites, likes, views, reuse, discussions, bookmarks, and so on. Evaluation types comprise quantitative, qualitative, and mixed evaluation. Evaluation formats comprise technical and non-technical evaluation (researcher-made and discussion-based evaluation): technical evaluations follow predefined procedures or repeatable processes to reach a result, whereas non-technical evaluations are defined by experts according to specific situations and conditions.
Conclusion: Results from all three methods indicate three key indicators and nine sub-indicators that are crucial for evaluating scientific output within the scholarly publishing ecosystem, grouped predominantly by form, type, and format. The findings demonstrate alignment among the three studies (systematic review, observation of scholarly publication networks, and survey of experts), although each study highlighted specific evaluation indicators. Across all three, the primary focus falls on the form and type of evaluation, with greater emphasis on common, well-established formats, and individual and organizational needs and objectives significantly influence the selection of evaluation indicators. Categorizing evaluation indicators will help stakeholders understand the evaluation procedures of scholarly publishing ecosystems and choose appropriate evaluation methods (the categorization is sketched below). [ABSTRACT FROM AUTHOR]
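The abstract's three key indicators and nine sub-indicators can be read as the three groups named above together with their sub-categories (four forms, three types, two formats). A minimal sketch under that reading, modeled as a plain Python dictionary; the one-line glosses paraphrase the abstract and are not quoted definitions from the paper.

```python
# Sketch of the form/type/format categorization described in the abstract,
# modeled as a nested dictionary: 3 key indicators, 4 + 3 + 2 = 9 sub-indicators.

EVALUATION_TAXONOMY = {
    "form": {    # what kind of evaluation is performed
        "content": "judgment of the work itself",
        "open": "ongoing, transparent post-publication evaluation by anyone interested",
        "altmetric": "social-web measures: cites, likes, views, reuse, discussions, bookmarks",
        "bibliographic": "evaluation of creators (individuals, organizations) and sources",
    },
    "type": {    # the methodological character of the evaluation
        "quantitative": None,
        "qualitative": None,
        "mixed": None,
    },
    "format": {  # how the evaluation is carried out
        "technical": "predefined, repeatable procedures",
        "non-technical": "researcher-made or discussion-based, defined per situation",
    },
}

# List the sub-indicators under each key indicator:
for group, subs in EVALUATION_TAXONOMY.items():
    print(group, "->", ", ".join(subs))
```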
- Published
- 2024