Use of a large language model with instruction-tuning for reliable clinical frailty scoring
- Author
- Kee, Xiang Lee Jamie; Sng, Gerald Gui Ren; Lim, Daniel Yan Zheng; Tung, Joshua Yi Min; Abdullah, Hairil Rizal; Chowdury, Anupama Roy
- Subjects
- Language models; Artificial intelligence; Temperature control; Activities of daily living; Frailty
- Abstract
Background: Frailty is an important predictor of health outcomes, characterized by increased vulnerability due to physiological decline. The Clinical Frailty Scale (CFS) is commonly used for frailty assessment but may be influenced by rater bias. Use of artificial intelligence (AI), particularly Large Language Models (LLMs), offers a promising method for efficient and reliable frailty scoring. Methods: The study utilized seven standardized patient scenarios to evaluate the consistency and reliability of CFS scoring by OpenAI's GPT-3.5-turbo model. Two methods were tested: a basic prompt, and an instruction-tuned prompt incorporating the CFS definition, a directive for accurate responses, and temperature control. The outputs of the two methods were compared using the Mann–Whitney U test, inter-rater reliability was assessed with Fleiss' kappa, and the outputs were compared with historic human scores of the same scenarios. Results: The LLM's median scores were similar to those of human raters, with differences of no more than one point. Significant differences in score distributions were observed between the basic and instruction-tuned prompts in five out of seven scenarios. The instruction-tuned prompt showed high inter-rater reliability (Fleiss' kappa of 0.887) and produced consistent responses in all scenarios. Difficulty in scoring was noted in scenarios with less explicit information on activities of daily living (ADLs). Conclusions: This study demonstrates the potential of LLMs to score clinical frailty consistently and with high reliability, and shows that prompt engineering via instruction-tuning can be a simple but effective approach for optimizing LLMs in healthcare applications. The LLM may overestimate frailty scores when less information about ADLs is provided, possibly because it is less subject to implicit assumptions and extrapolation than human raters. Future research could explore the integration of LLMs into clinical research and frailty-related outcome prediction. [ABSTRACT FROM AUTHOR]
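The abstract reports inter-rater reliability as Fleiss' kappa (0.887 for the instruction-tuned prompt). As a minimal sketch of how that statistic is computed, the function below implements the standard Fleiss' kappa formula from a ratings table; the function name and the example ratings are illustrative assumptions, not data from the study.

```python
# Minimal from-scratch Fleiss' kappa: table[i][j] holds the number of
# raters who assigned subject (scenario) i to category (CFS score) j.
# Every subject must be rated by the same number of raters.

def fleiss_kappa(table):
    n_subjects = len(table)
    n_raters = sum(table[0])

    # Observed agreement: mean per-subject agreement P_i.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ]
    p_bar = sum(p_i) / n_subjects

    # Chance agreement P_e from marginal category proportions p_j.
    n_categories = len(table[0])
    p_j = [
        sum(row[j] for row in table) / (n_subjects * n_raters)
        for j in range(n_categories)
    ]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)


# Hypothetical example: 3 raters score 2 scenarios on a 2-point scale,
# with all raters agreeing on every scenario (perfect agreement).
perfect = [[3, 0], [0, 3]]
print(fleiss_kappa(perfect))  # → 1.0
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate systematic disagreement; the study's 0.887 sits in the range conventionally read as near-perfect agreement.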
- Published
- 2024