A framework for human evaluation of large language models in healthcare derived from literature review.
- Authors
- Tam, Thomas Yu Chow; Sivarajkumar, Sonish; Kapoor, Sumit; Stolyar, Alisa V.; Polanska, Katelyn; McCarthy, Karleigh R.; Osterhoudt, Hunter; Wu, Xizhi; Visweswaran, Shyam; Fu, Sunyang; Mathur, Piyush; Cacciamani, Giovanni E.; Sun, Cong; Peng, Yifan; Wang, Yanshan
- Subjects
GENERATIVE artificial intelligence; MEDICAL protocols; ALLIED health education; PATIENT education; SCALE analysis (Psychology); MEDICAL information storage & retrieval systems; DECISION support systems; MEDICAL specialties & specialists; RESEARCH funding; PLANNING techniques; PATIENT safety; MANAGEMENT information systems; MEDICAL care; RESEARCH evaluation; EVALUATION of organizational effectiveness; NATURAL language processing; CONFIDENCE; PATIENT care; DECISION making in clinical medicine; HOSPITAL emergency services; SYSTEMATIC reviews; MEDLINE; INFORMATION needs; CONCEPTUAL structures; LITERATURE reviews; TRUST; EMPLOYEE recruitment; ONLINE information services; STAKEHOLDER analysis; MANAGEMENT of medical records; RELIABILITY (Personality trait); EVALUATION
- Abstract
With generative artificial intelligence (GenAI), particularly large language models (LLMs), continuing to make inroads in healthcare, assessing LLMs with human evaluations is essential to ensuring safety and effectiveness. This study reviews the existing literature on human evaluation methodologies for LLMs in healthcare across various medical specialties, addressing factors such as evaluation dimensions, sample types and sizes, selection and recruitment of evaluators, frameworks and metrics, evaluation process, and type of statistical analysis. Our literature review of 142 studies reveals gaps in the reliability, generalizability, and applicability of current human evaluation practices. To overcome these significant obstacles to LLM development and deployment in healthcare, we propose QUEST, a comprehensive and practical framework for human evaluation of LLMs covering three phases of workflow: Planning; Implementation and Adjudication; and Scoring and Review. QUEST is designed around five proposed evaluation principles: Quality of Information, Understanding and Reasoning, Expression Style and Persona, Safety and Harm, and Trust and Confidence. [ABSTRACT FROM AUTHOR]
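To make the five QUEST evaluation principles concrete, the minimal Python sketch below models one evaluator's rating of one LLM output as a small data structure and aggregates scores per dimension. The 1-5 Likert scale, the field names (`sample_id`, `evaluator_id`), and the mean aggregation are illustrative assumptions for this sketch; the published framework does not prescribe this specific schema.

```python
from dataclasses import dataclass, field
from statistics import mean

# The five QUEST evaluation principles, as named in the abstract.
QUEST_DIMENSIONS = (
    "Quality of Information",
    "Understanding and Reasoning",
    "Expression Style and Persona",
    "Safety and Harm",
    "Trust and Confidence",
)

@dataclass
class EvaluationRecord:
    """One evaluator's rating of one LLM output (hypothetical schema).

    Scores are assumed here to be on a 1-5 Likert scale per dimension;
    the paper itself does not mandate this particular scale.
    """
    sample_id: str
    evaluator_id: str
    scores: dict = field(default_factory=dict)  # dimension name -> 1..5

    def validate(self) -> None:
        # Require an in-range score for every QUEST dimension.
        for dim in QUEST_DIMENSIONS:
            score = self.scores.get(dim)
            if score is None or not 1 <= score <= 5:
                raise ValueError(f"Missing or out-of-range score for {dim!r}")

def mean_per_dimension(records: list) -> dict:
    """Average ratings across evaluators and samples (illustrative only)."""
    return {
        dim: mean(r.scores[dim] for r in records)
        for dim in QUEST_DIMENSIONS
    }
```

In a workflow like QUEST's, records of this kind would be collected during the Implementation and Adjudication phase, with disagreements between evaluators reconciled (for example, via inter-rater agreement statistics) before the Scoring and Review phase aggregates results; the simple per-dimension mean shown here stands in for whatever statistical analysis a given study adopts.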
- Published
- 2024