Diagnostic reasoning prompts reveal the potential for large language model interpretability in medicine

Authors :
Thomas Savage
Ashwin Nayak
Robert Gallo
Ekanath Rangan
Jonathan H. Chen
Source :
npj Digital Medicine, Vol 7, Iss 1, Pp 1-7 (2024)
Publication Year :
2024
Publisher :
Nature Portfolio, 2024.

Abstract

One of the major barriers to using large language models (LLMs) in medicine is the perception that they use uninterpretable methods to make clinical decisions that are inherently different from the cognitive processes of clinicians. In this manuscript we develop diagnostic reasoning prompts to study whether LLMs can imitate clinical reasoning while accurately forming a diagnosis. We find that GPT-4 can be prompted to mimic the common clinical reasoning processes of clinicians without sacrificing diagnostic accuracy. This is significant because an LLM that can imitate clinical reasoning to provide an interpretable rationale offers physicians a means to evaluate whether an LLM's response is likely correct and can be trusted for patient care. Prompting methods that use diagnostic reasoning have the potential to mitigate the “black box” limitations of LLMs, bringing them one step closer to safe and effective use in medicine.

Details

Language :
English
ISSN :
2398-6352
Volume :
7
Issue :
1
Database :
Directory of Open Access Journals
Journal :
npj Digital Medicine
Publication Type :
Academic Journal
Accession number :
edsdoj.949bf0bf6eff4ff1851bbf287b8ad038
Document Type :
article
Full Text :
https://doi.org/10.1038/s41746-024-01010-1