
Towards Robust Universal Information Extraction: Benchmark, Evaluation, and Solution

Authors :
Zhu, Jizhao
Shi, Akang
Li, Zixuan
Bai, Long
Jin, Xiaolong
Guo, Jiafeng
Cheng, Xueqi
Publication Year :
2025

Abstract

In this paper, we aim to enhance the robustness of Universal Information Extraction (UIE) by introducing a new benchmark dataset, a comprehensive evaluation, and a feasible solution. Existing robust benchmark datasets have two key limitations: 1) They generate only a limited range of perturbations for a single Information Extraction (IE) task, which fails to evaluate the robustness of UIE models effectively; 2) They rely on small models or handcrafted rules to generate perturbations, often resulting in unnatural adversarial examples. Considering the powerful generation capabilities of Large Language Models (LLMs), we introduce a new benchmark dataset for Robust UIE, called RUIE-Bench, which utilizes LLMs to generate more diverse and realistic perturbations across different IE tasks. Based on this dataset, we comprehensively evaluate existing UIE models and reveal that both LLM-based models and other models suffer from significant performance drops. To improve robustness and reduce training costs, we propose a data-augmentation solution that dynamically selects hard samples for iterative training based on the model's inference loss. Experimental results show that training with only 15% of the data leads to an average 7.5% relative performance improvement across three IE tasks.
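The abstract only outlines the selection procedure, so the sketch below illustrates one plausible reading of loss-based hard-sample selection for iterative training. All names (compute_inference_loss, select_hard_samples, iterative_training), the use of the 15% figure as a per-round selection ratio, and the loop structure are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of loss-based hard-sample selection for iterative
# data augmentation; names and parameters are illustrative assumptions.
import random


def compute_inference_loss(model, sample):
    # Placeholder: in practice this would run the UIE model on `sample`
    # and return its loss; here we fake it with a random score.
    return random.random()


def select_hard_samples(model, pool, fraction=0.15):
    """Rank the augmentation pool by inference loss and keep the hardest slice."""
    scored = sorted(pool, key=lambda s: compute_inference_loss(model, s), reverse=True)
    k = max(1, int(len(scored) * fraction))
    return scored[:k]


def iterative_training(model, pool, rounds=3, fraction=0.15):
    # Each round: score the pool, keep the hardest samples, and (in a real
    # pipeline) fine-tune the model on them before re-scoring.
    for r in range(rounds):
        hard = select_hard_samples(model, pool, fraction)
        # train_on(model, hard)  # fine-tuning step, omitted in this sketch
        print(f"round {r}: selected {len(hard)} hard samples")
    return model


if __name__ == "__main__":
    dummy_pool = [f"sample_{i}" for i in range(100)]
    iterative_training(model=None, pool=dummy_pool)
```

The key design idea this sketch captures is that the model's own inference loss, recomputed each round, decides which augmented samples are worth training on, so only a small, hard subset of the data is ever used.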

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2503.03201
Document Type :
Working Paper