
MedCalc-Bench: Evaluating Large Language Models for Medical Calculations

Authors :
Khandekar, Nikhil
Jin, Qiao
Xiong, Guangzhi
Dunn, Soren
Applebaum, Serina S
Anwar, Zain
Sarfo-Gyamfi, Maame
Safranek, Conrad W
Anwar, Abid A
Zhang, Andrew
Gilson, Aidan
Singer, Maxwell B
Dave, Amisha
Taylor, Andrew
Zhang, Aidong
Chen, Qingyu
Lu, Zhiyong
Publication Year :
2024

Abstract

As opposed to evaluating computation and logic-based reasoning, current benchmarks for evaluating large language models (LLMs) in medicine are primarily focused on question-answering involving domain knowledge and descriptive reasoning. While such qualitative capabilities are vital to medical diagnosis, in real-world scenarios, doctors frequently use clinical calculators that follow quantitative equations and rule-based reasoning paradigms for evidence-based decision support. To this end, we propose MedCalc-Bench, a first-of-its-kind dataset focused on evaluating the medical calculation capability of LLMs. MedCalc-Bench contains an evaluation set of over 1000 manually reviewed instances from 55 different medical calculation tasks. Each instance in MedCalc-Bench consists of a patient note, a question requesting to compute a specific medical value, a ground truth answer, and a step-by-step explanation showing how the answer is obtained. While our evaluation results show the potential of LLMs in this area, none of them are effective enough for clinical settings. Common issues include extracting the incorrect entities, not using the correct equation or rules for a calculation task, or incorrectly performing the arithmetic for the computation. We hope our study highlights the quantitative knowledge and reasoning gaps in LLMs within medical settings, encouraging future improvements of LLMs for various clinical calculation tasks.

Comment: GitHub link: https://github.com/ncbi-nlp/MedCalc-Bench HuggingFace link: https://huggingface.co/datasets/nsk7153/MedCalc-Bench
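For reference, a minimal sketch of loading the evaluation set from the Hugging Face link given in the Comment above, assuming the `datasets` Python library is installed; the split and column names are not stated in this record, so the code only inspects whatever schema the dataset exposes rather than assuming specific field names.

from datasets import load_dataset

# Dataset ID taken from the HuggingFace link in the Comment above.
ds = load_dataset("nsk7153/MedCalc-Bench")
print(ds)  # show available splits and column names

# Each instance should carry the fields described in the abstract: a patient
# note, a calculation question, a ground truth answer, and a step-by-step
# explanation (exact column names may differ; see the dataset card).
first_split = next(iter(ds.values()))
print(first_split[0])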

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.12036
Document Type :
Working Paper