
Benchmarking Complex Instruction-Following with Multiple Constraints Composition

Authors:
Wen, Bosi
Ke, Pei
Gu, Xiaotao
Wu, Lindong
Huang, Hao
Zhou, Jinfeng
Li, Wenchuang
Hu, Binxin
Gao, Wendy
Xu, Jiaxin
Liu, Yiming
Tang, Jie
Wang, Hongning
Huang, Minlie
Publication Year:
2024

Abstract

Instruction following is one of the fundamental capabilities of large language models (LLMs). As their abilities continually improve, LLMs are increasingly applied to handle complex human instructions in real-world scenarios. How to evaluate the complex instruction-following ability of LLMs has therefore become a critical research problem. Existing benchmarks mainly focus on modeling different types of constraints in human instructions while neglecting the composition of different constraints, which is an indispensable constituent of complex instructions. To this end, we propose ComplexBench, a benchmark for comprehensively evaluating the ability of LLMs to follow complex instructions composed of multiple constraints. We propose a hierarchical taxonomy for complex instructions, including 4 constraint types, 19 constraint dimensions, and 4 composition types, and manually collect a high-quality dataset accordingly. To make the evaluation reliable, we augment LLM-based evaluators with rules to effectively verify whether generated texts satisfy each constraint and composition. Furthermore, we obtain the final evaluation score based on the dependency structure determined by the different composition types. ComplexBench identifies significant deficiencies in existing LLMs when they deal with complex instructions involving the composition of multiple constraints.

Comment: 20 pages, 7 figures
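The abstract's idea of scoring along a dependency structure can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the `Constraint` class, the `depends_on` field, and the example constraint names are all hypothetical. It only shows the general principle that a constraint's verdict counts as satisfied only when the constraints it depends on are also satisfied:

```python
from dataclasses import dataclass, field

@dataclass
class Constraint:
    name: str
    satisfied: bool  # verdict from a rule-based or LLM-based check
    depends_on: list = field(default_factory=list)  # prerequisite constraint names

def score(constraints):
    """Fraction of constraints counted as satisfied, where a constraint
    whose prerequisites fail is itself treated as unsatisfied."""
    by_name = {c.name: c for c in constraints}

    def effective(c):
        # A constraint holds only if its own check passes AND every
        # constraint it depends on (transitively) holds as well.
        return c.satisfied and all(effective(by_name[d]) for d in c.depends_on)

    return sum(effective(c) for c in constraints) / len(constraints)

# Hypothetical example: a formatting constraint gates a content constraint.
checks = [
    Constraint("format_is_json", True),
    Constraint("has_three_keys", True, depends_on=["format_is_json"]),
    Constraint("under_100_words", False),
]
print(score(checks))  # 2 of 3 constraints effectively satisfied
```

If `format_is_json` were to fail, `has_three_keys` would no longer count even though its own check passed, which is how a dependency structure changes the final score relative to simply averaging per-constraint verdicts.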

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.03978
Document Type:
Working Paper