PROXYQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models

Authors :
Tan, Haochen
Guo, Zhijiang
Shi, Zhan
Xu, Lu
Liu, Zhili
Feng, Yunlong
Li, Xiaoguang
Wang, Yasheng
Shang, Lifeng
Liu, Qun
Song, Linqi
Publication Year :
2024

Abstract

Large Language Models (LLMs) have achieved remarkable success in understanding long-form content. However, their capability to generate long-form content, such as reports and articles, remains relatively unexplored and inadequately assessed by existing benchmarks. The prevalent evaluation methods, which predominantly rely on crowdsourcing, are labor-intensive and inefficient, while automated metrics such as the ROUGE score correlate poorly with human judgment. In this paper, we propose ProxyQA, an innovative framework dedicated to assessing long-form text generation. ProxyQA comprises in-depth, human-curated meta-questions spanning various domains, each accompanied by specific proxy-questions with pre-annotated answers. LLMs are tasked with generating extensive content in response to these meta-questions. By engaging an evaluator and supplying the generated texts as contextual background, ProxyQA assesses the quality of the generated content through the evaluator's accuracy in answering the proxy-questions. We examine multiple LLMs, emphasizing ProxyQA's demanding nature as a high-quality assessment tool. Human evaluation demonstrates that the proxy-question method is notably self-consistent and aligns closely with human evaluative standards. The dataset and leaderboard are available at https://proxy-qa.com.

Comment: Accepted to ACL 2024 main conference
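To make the abstract's protocol concrete, the sketch below illustrates the proxy-question scoring loop in Python. It is a minimal illustration under stated assumptions, not the authors' released implementation: the `ProxyQuestion` class and the `evaluator` and `judge` callables are hypothetical names introduced here for clarity.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProxyQuestion:
    """Hypothetical container; field names are illustrative, not from the paper."""
    question: str
    gold_answer: str  # the pre-annotated reference answer

def proxyqa_score(
    generated_text: str,
    proxy_questions: List[ProxyQuestion],
    evaluator: Callable[[str, str], str],  # (context, question) -> answer
    judge: Callable[[str, str], bool],     # (predicted, gold) -> correct?
) -> float:
    """Score a long-form generation by the evaluator's accuracy on the
    proxy-questions, reading only the generated text as context."""
    if not proxy_questions:
        return 0.0
    correct = sum(
        judge(evaluator(generated_text, pq.question), pq.gold_answer)
        for pq in proxy_questions
    )
    return correct / len(proxy_questions)

# Toy stand-ins so the sketch runs offline; a real setup would query an
# evaluator LLM and could use an LLM judge instead of exact match.
if __name__ == "__main__":
    pqs = [ProxyQuestion("In what year was ProxyQA accepted?", "2024")]
    naive_evaluator = lambda ctx, q: "2024" if "2024" in ctx else "unknown"
    exact_match = lambda pred, gold: pred.strip().lower() == gold.strip().lower()
    print(proxyqa_score("ProxyQA was accepted to ACL 2024.", pqs, naive_evaluator, exact_match))
```

The key design point the sketch captures is that the evaluator sees only the generated text, so the score rewards content that actually answers the pre-annotated proxy-questions rather than surface overlap with a reference, which is where metrics like ROUGE fall short.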

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2401.15042
Document Type :
Working Paper