
ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models

Authors:
Elangovan, Aparna
Liu, Ling
Xu, Lei
Bodapati, Sravan
Roth, Dan
Publication Year: 2024

Abstract

In this position paper, we argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking that draws upon insights from disciplines such as user experience research and human behavioral psychology to ensure that the experimental design and results are reliable. The conclusions from these evaluations, thus, must consider factors such as usability, aesthetics, and cognitive biases. We highlight how cognitive biases can conflate fluent information and truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert. Furthermore, the evaluation should differentiate the capabilities and weaknesses of increasingly powerful large language models -- which requires effective test sets. The scalability of human evaluation is also crucial to wider adoption. Hence, to design an effective human evaluation system in the age of generative NLP, we propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars -- Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.

Comment: Accepted in ACL 2024

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2405.18638
Document Type: Working Paper
Full Text: https://doi.org/10.18653/v1/2024.acl-long.63