
MedGPTEval: A Dataset and Benchmark to Evaluate Responses of Large Language Models in Medicine

Authors:
Xu, Jie
Lu, Lu
Yang, Sen
Liang, Bilin
Peng, Xinwei
Pang, Jiali
Ding, Jinru
Shi, Xiaoming
Yang, Lingrui
Song, Huan
Li, Kang
Sun, Xin
Zhang, Shaoting
Publication Year:
2023

Abstract

METHODS: First, a set of evaluation criteria is designed based on a comprehensive literature review. Second, the candidate criteria are optimized using a Delphi method by five experts in medicine and engineering. Third, three clinical experts design a set of medical datasets to interact with LLMs. Finally, benchmarking experiments are conducted on the datasets, and the responses generated by LLM-based chatbots are recorded for blind evaluation by five licensed medical experts.

RESULTS: The resulting evaluation criteria cover medical professional capabilities, social comprehensive capabilities, contextual capabilities, and computational robustness, with sixteen detailed indicators. The medical datasets comprise twenty-seven medical dialogues and seven case reports in Chinese. Three chatbots are evaluated: ChatGPT by OpenAI, ERNIE Bot by Baidu Inc., and Doctor PuJiang (Dr. PJ) by Shanghai Artificial Intelligence Laboratory. Experimental results show that Dr. PJ outperforms ChatGPT and ERNIE Bot in both multiple-turn medical dialogue and case report scenarios.
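The abstract describes a blind, multi-expert scoring protocol: five licensed experts rate anonymized chatbot responses against sixteen indicators grouped into four capability categories. The record does not give the paper's actual rubric, indicator names, score scale, or aggregation method, so the sketch below is only a hypothetical illustration of how such a pipeline could be organized; every name, the 1-5 scale, and the averaging scheme are assumptions.

```python
from dataclasses import dataclass
from statistics import mean

# The four capability categories come from the abstract; the sixteen concrete
# indicators are not listed in this record, so placeholder names are used.
CATEGORIES = {
    "medical_professional": ["indicator_placeholder_1", "indicator_placeholder_2"],
    "social_comprehensive": ["indicator_placeholder_3"],
    "contextual": ["indicator_placeholder_4"],
    "computational_robustness": ["indicator_placeholder_5"],
}

@dataclass
class BlindRating:
    """One expert's scores for one anonymized chatbot response."""
    response_id: str  # experts see only an anonymized ID, not the model name
    expert_id: str
    scores: dict      # indicator name -> score (assumed 1-5 Likert scale)

def aggregate(ratings, indicator_to_category):
    """Average each indicator over all experts, then average within each category."""
    by_indicator = {}
    for r in ratings:
        for ind, score in r.scores.items():
            by_indicator.setdefault(ind, []).append(score)
    indicator_means = {ind: mean(vals) for ind, vals in by_indicator.items()}
    by_category = {}
    for ind, m in indicator_means.items():
        by_category.setdefault(indicator_to_category[ind], []).append(m)
    return {cat: mean(ms) for cat, ms in by_category.items()}

if __name__ == "__main__":
    indicator_to_category = {
        ind: cat for cat, inds in CATEGORIES.items() for ind in inds
    }
    # Five experts, per the abstract, each scoring one anonymized response.
    ratings = [
        BlindRating("resp-001", f"expert-{i}", {ind: 4 for ind in indicator_to_category})
        for i in range(5)
    ]
    print(aggregate(ratings, indicator_to_category))
```

The design point mirrored from the abstract is blinding: ratings are keyed to anonymized response IDs, so experts never know which of the three chatbots produced a given answer.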

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2305.07340
Document Type:
Working Paper