
The Devil is in the Details: On the Pitfalls of Event Extraction Evaluation

Authors:
Peng, Hao
Wang, Xiaozhi
Yao, Feng
Zeng, Kaisheng
Hou, Lei
Li, Juanzi
Liu, Zhiyuan
Shen, Weixing
Publication Year:
2023

Abstract

Event extraction (EE) is a crucial task that aims to extract events from texts and comprises two subtasks: event detection (ED) and event argument extraction (EAE). In this paper, we examine the reliability of EE evaluation and identify three major pitfalls: (1) Discrepancies in data preprocessing make evaluation results on the same dataset not directly comparable, yet preprocessing details are rarely noted or specified in papers. (2) The output space discrepancy between different model paradigms leaves different-paradigm EE models without common ground for comparison and also causes unclear mapping issues between predictions and annotations. (3) The absence of pipeline evaluation in many EAE-only works makes them hard to compare directly with EE works and may not accurately reflect model performance in real-world pipeline scenarios. We demonstrate the significant influence of these pitfalls through comprehensive meta-analyses of recent papers and empirical experiments. To avoid these pitfalls, we suggest a series of remedies, including specifying data preprocessing, standardizing outputs, and providing pipeline evaluation results. To help implement these remedies, we develop a consistent evaluation framework, OmniEvent, which can be obtained from https://github.com/THU-KEG/OmniEvent.

Comment: Accepted at Findings of ACL 2023
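To make the second pitfall concrete, the minimal Python sketch below (an illustration of the general issue, not code from the paper or OmniEvent's actual API) shows how the criterion used to map predictions onto annotations changes the reported ED score for identical model output; the data and the `f1`, `span_only`, and `span_and_type` names are hypothetical.

```python
# Hypothetical illustration: the same predictions score differently under
# two matching criteria for trigger mentions (span-only vs. span + type).

def f1(preds, golds, match):
    """Micro F1 with greedy one-to-one matching: a prediction counts as a
    true positive iff `match` pairs it with a not-yet-used gold item."""
    remaining = list(golds)
    tp = 0
    for p in preds:
        for g in remaining:
            if match(p, g):
                remaining.remove(g)
                tp += 1
                break
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(golds) if golds else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Each trigger mention: (char_start, char_end, event_type).
gold = [(10, 16, "Attack"), (42, 47, "Transport")]
pred = [(10, 16, "Die"), (42, 47, "Transport")]  # first event type is wrong

span_only = lambda p, g: p[:2] == g[:2]   # matches on offsets only
span_and_type = lambda p, g: p == g       # also requires the event type

print(f"span-only F1: {f1(pred, gold, span_only):.2f}")      # 1.00
print(f"span+type F1: {f1(pred, gold, span_and_type):.2f}")  # 0.50
```

If one paper reports the lenient criterion and another the strict one without saying so, the 0.50-point gap here would be indistinguishable from a genuine modeling difference, which is why the abstract calls for standardized outputs and explicitly specified evaluation settings.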

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2306.06918
Document Type:
Working Paper