OpenMedLM: prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models.
- Source :
- Scientific Reports; 6/19/2024, Vol. 14 Issue 1, p1-12, 12p
- Publication Year :
- 2024
-
Abstract
- Large language models (LLMs) can accomplish specialized medical-knowledge tasks; however, equitable access is hindered by extensive fine-tuning requirements, the need for specialized medical data, and limited access to proprietary models. Open-source (OS) medical LLMs show improving performance and provide the transparency and compliance required in healthcare. We present OpenMedLM, a prompting platform delivering state-of-the-art (SOTA) performance for OS LLMs on medical benchmarks. We evaluated OS foundation LLMs (7B-70B parameters) on medical benchmarks (MedQA, MedMCQA, PubMedQA, and the MMLU medical subset) and selected Yi 34B for developing OpenMedLM. Prompting strategies included zero-shot, few-shot, chain-of-thought, and ensemble/self-consistency voting. OpenMedLM delivered OS SOTA results on three medical LLM benchmarks, surpassing previous best-performing OS models that relied on costly and extensive fine-tuning. OpenMedLM presents the first results to date demonstrating that OS foundation models can reach this level of performance without specialized fine-tuning. The model achieved 72.6% accuracy on MedQA, outperforming the previous SOTA by 2.4%, and 81.7% accuracy on the MMLU medical subset, making it the first OS LLM to surpass 80% accuracy on this benchmark. Our results highlight medical-specific emergent properties in OS LLMs not previously documented, validate the ability of OS models to accomplish healthcare tasks, and underscore the benefits of prompt engineering for improving the performance of accessible LLMs in medical applications. [ABSTRACT FROM AUTHOR]
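- The abstract names few-shot chain-of-thought prompting combined with ensemble/self-consistency voting as the key strategies. The following is a minimal sketch of that general technique (sample several reasoning paths at nonzero temperature, then take a majority vote over the extracted answers); the endpoint URL, model name, prompt wording, and sampling parameters are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch: few-shot chain-of-thought prompting with self-consistency
# (majority) voting over multi-choice medical questions.
# Assumes an OpenAI-compatible HTTP endpoint (e.g. a local server hosting
# a Yi 34B model); all names and parameters below are assumptions.
import re
from collections import Counter

import requests

API_URL = "http://localhost:8000/v1/chat/completions"  # assumed local server

# One worked example to elicit step-by-step reasoning before the answer.
FEW_SHOT_COT = """Q: Which vitamin deficiency causes scurvy?
A) Vitamin A  B) Vitamin C  C) Vitamin D  D) Vitamin K
Reasoning: Scurvy results from impaired collagen synthesis due to lack of ascorbic acid.
Answer: B

"""

def ask_once(question: str, temperature: float = 0.7) -> str:
    """Sample one chain-of-thought completion and return the letter answer."""
    prompt = FEW_SHOT_COT + f"Q: {question}\nReasoning:"
    resp = requests.post(API_URL, json={
        "model": "yi-34b",                     # assumed model name on the server
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,            # >0 so samples differ across votes
        "max_tokens": 256,
    })
    text = resp.json()["choices"][0]["message"]["content"]
    match = re.search(r"Answer:\s*([A-D])", text)
    return match.group(1) if match else ""

def self_consistent_answer(question: str, n_votes: int = 5) -> str:
    """Sample n_votes reasoning paths and return the majority-vote answer."""
    votes = [ask_once(question) for _ in range(n_votes)]
    votes = [v for v in votes if v]            # drop unparseable samples
    return Counter(votes).most_common(1)[0][0] if votes else ""

if __name__ == "__main__":
    q = ("A patient presents with fatigue and microcytic anemia. "
         "Which is the most likely deficiency?\n"
         "A) Iron  B) Vitamin B12  C) Folate  D) Vitamin C")
    print(self_consistent_answer(q))
```

- Increasing the number of sampled reasoning paths typically trades inference cost for answer stability; the paper's reported gains come from tuning such prompting choices rather than from fine-tuning the underlying model.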
Details
- Language :
- English
- ISSN :
- 2045-2322
- Volume :
- 14
- Issue :
- 1
- Database :
- Complementary Index
- Journal :
- Scientific Reports
- Publication Type :
- Academic Journal
- Accession number :
- 177993736
- Full Text :
- https://doi.org/10.1038/s41598-024-64827-6