An evaluation of the performance of stopping rules in AI‐aided screening for psychological meta‐analytical research.
- Author
König, Lars, Zitzmann, Steffen, Fütterer, Tim, Campos, Diego G., Scherer, Ronny, and Hecht, Martin
- Subjects
MACHINE learning, PSYCHOLOGICAL research, RESEARCH personnel, DILEMMA, ALGORITHMS
- Abstract
Several AI‐aided screening tools have emerged to tackle the ever‐expanding body of literature. These tools employ active learning, where algorithms rank abstracts based on human feedback. However, researchers using these tools face a crucial dilemma: When should they stop screening without knowing the proportion of relevant studies? Although numerous stopping rules have been proposed to guide users in this decision, they have yet to undergo comprehensive evaluation. In this study, we evaluated the performance of three stopping rules: the knee method, a data‐driven heuristic, and a prevalence estimation technique. We measured performance via sensitivity, specificity, and screening cost, and we explored the influence of the prevalence of relevant studies and the choice of the learning algorithm. We curated a dataset of abstract collections from meta‐analyses across five psychological research domains. Our findings revealed differences between stopping rules on all three performance measures, as well as variation in each rule's performance across prevalence ratios. Moreover, although the learning algorithm had a relatively minor impact, specific combinations of stopping rules and learning algorithms proved most effective for certain prevalence ratios of relevant abstracts. Based on these results, we derived practical recommendations for users of AI‐aided screening tools. Furthermore, we discuss possible implications and offer suggestions for future research.
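To make the knee method mentioned in the abstract concrete: it inspects the "gain curve" of relevant records found versus records screened and stops once the curve has flattened enough. The sketch below is a minimal, hypothetical Python illustration using one common slope-ratio formulation of a knee rule; the function name `knee_stop`, the threshold `rho`, the `min_screened` floor, and the toy simulation are assumptions for illustration, not the exact rule or data evaluated in the study.

```python
import random


def knee_stop(cumulative_relevant: list[int], rho: float = 6.0,
              min_screened: int = 150) -> bool:
    """Return True if screening may stop under a slope-ratio knee rule.

    cumulative_relevant[i] is the number of relevant records found after
    screening i + 1 records, so the curve is non-decreasing. The rule
    fires when the slope before some candidate knee is at least rho
    times the slope after it, i.e. when new finds have tapered off.
    """
    s = len(cumulative_relevant)
    if s < min_screened:  # never stop on too little evidence
        return False
    best_ratio = 0.0
    for i in range(1, s - 1):  # candidate knee positions
        slope_before = cumulative_relevant[i] / (i + 1)
        # +1 smooths the case where nothing was found after the knee
        slope_after = (cumulative_relevant[-1] - cumulative_relevant[i] + 1) / (s - i - 1)
        best_ratio = max(best_ratio, slope_before / slope_after)
    return best_ratio >= rho


if __name__ == "__main__":
    random.seed(1)
    # Toy front-loaded gain curve: an active learner surfaces most of
    # the 50 relevant records early, after which finds taper off.
    curve, found = [], 0
    for i in range(1000):
        if found < 50 and random.random() < 0.3 * (1 - i / 1000) ** 4:
            found += 1
        curve.append(found)
        if knee_stop(curve):
            print(f"knee rule fires after {i + 1} records; {found}/50 relevant found")
            break
```

Under a rule like this, the abstract's performance measures map loosely onto: sensitivity, the share of truly relevant records found before stopping; specificity, the share of irrelevant records the screener never had to read; and screening cost, the fraction of the collection screened before the rule fired.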
- Published
2024