
Comparing Criteria Development Across Domain Experts, Lay Users, and Models in Large Language Model Evaluation

Authors:
Szymanski, Annalisa
Gebreegziabher, Simret Araya
Anuyah, Oghenemaro
Metoyer, Ronald A.
Li, Toby Jia-Jun
Publication Year: 2024

Abstract

Large Language Models (LLMs) are increasingly utilized for domain-specific tasks, yet integrating domain expertise into evaluating their outputs remains challenging. A common approach to evaluating LLMs is to use metrics, or criteria, which are assertions used to assess performance and help ensure that outputs align with domain-specific standards. Previous efforts have involved developers, lay users, or the LLMs themselves in creating these criteria; however, evaluation from a domain expertise perspective in particular remains understudied. This study explores how domain experts contribute to LLM evaluation by comparing their criteria with those generated by LLMs and lay users. We further investigate how the criteria-setting process evolves, analyzing changes between a priori and a posteriori stages. Our findings emphasize the importance of involving domain experts early in the evaluation process while utilizing the complementary strengths of lay users and LLMs. We suggest implications for designing workflows that leverage these strengths at different evaluation stages.

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2410.02054
Document Type: Working Paper