
Probabilistic Medical Predictions of Large Language Models

Authors:
Gu, Bowen
Desai, Rishi J.
Lin, Kueiyu Joshua
Yang, Jie
Publication Year:
2024

Abstract

Large Language Models (LLMs) have demonstrated significant potential in clinical applications through prompt engineering, which enables the generation of flexible and diverse clinical predictions. However, they pose challenges in producing prediction probabilities, which are essential for transparency and for allowing clinicians to apply flexible probability thresholds in decision-making. While explicit prompt instructions can lead LLMs to provide prediction probability numbers through text generation, LLMs' limitations in numerical reasoning raise concerns about the reliability of these text-generated probabilities. To assess this reliability, we compared explicit probabilities derived from text generation to implicit probabilities calculated from the likelihood of predicting the correct label token. Experimenting with six advanced open-source LLMs across five medical datasets, we found that the performance of explicit probabilities was consistently lower than that of implicit probabilities with respect to discrimination, precision, and recall. Moreover, these differences were larger for small LLMs and imbalanced datasets, emphasizing the need for cautious interpretation and application, as well as further research into robust probability estimation methods for LLMs in clinical contexts.

Comment: 58 pages, 3 figures, 3 tables, Submitted to Nature Communication
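The distinction the abstract draws can be illustrated with a minimal sketch. An implicit probability is obtained by applying a softmax over the logits the model assigns to the candidate label tokens at the prediction position, whereas an explicit probability is a number parsed from generated text. The logit values and label names below are hypothetical placeholders, not outputs of any model from the paper:

```python
import math

def implicit_probability(label_logits, positive_label="Yes"):
    """Softmax over the logits of the candidate label tokens.

    label_logits: dict mapping each candidate label token to the logit
    the LLM assigns it at the position where the label would be generated.
    Values here are made up for illustration; a real model supplies them.
    """
    denom = sum(math.exp(v) for v in label_logits.values())
    return math.exp(label_logits[positive_label]) / denom

# Hypothetical logits for a binary clinical prediction:
logits = {"Yes": 2.1, "No": 0.3}
p_implicit = implicit_probability(logits)

# An explicit probability, by contrast, would be parsed from the model's
# generated text, e.g. if it wrote "The probability is 0.8":
p_explicit = 0.8

print(round(p_implicit, 3))
```

The paper's comparison amounts to evaluating scores of these two kinds (discrimination, precision, recall) against the true labels; this sketch only shows where each number comes from.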

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.11316
Document Type:
Working Paper