1. Structuring medication signeturs as a language regression task: comparison of zero- and few-shot GPT with fine-tuned models.
- Author
Garcia-Agundez, Augusto, Kay, Julia, Li, Jing, Gianfrancesco, Milena, Rai, Baljeet, Hu, Angela, Schmajuk, Gabriela, and Yazdany, Jinoos
- Subjects
immunomodulating drugs, in-context learning, language regression, large language models, natural language processing
- Abstract
IMPORTANCE: Electronic health record textual sources such as medication signeturs (sigs) contain valuable information that is not always available in structured form. Sigs are commonly processed through manual annotation, a repetitive and time-consuming task that could be fully automated using large language models (LLMs). While most sigs include simple instructions, some include complex patterns. OBJECTIVES: We aimed to compare the performance of GPT-3.5 and GPT-4 with smaller fine-tuned models (ClinicalBERT, BlueBERT) in extracting the average daily dose of 2 immunomodulating medications with frequently complex sigs: hydroxychloroquine and prednisone. METHODS: Using manually annotated sigs as the gold standard, we compared the performance of these models on 702 hydroxychloroquine and 22 104 prednisone prescriptions. RESULTS: GPT-4 vastly outperformed all other models for this task at every level of in-context learning. With 100 in-context examples, the model correctly annotated 94% of hydroxychloroquine and 95% of prednisone sigs to within 1 significant digit. Error analysis conducted by 2 additional manual annotators on annotator-model disagreements suggests that the vast majority of disagreements were model errors. Many model errors related to ambiguous sigs on which there was also frequent annotator disagreement. DISCUSSION: Paired with minimal manual annotation, GPT-4 achieved excellent performance for language regression of complex medication sigs and vastly outperformed GPT-3.5, ClinicalBERT, and BlueBERT. However, the number of in-context examples needed to reach maximum performance was similar to that needed by GPT-3.5. CONCLUSION: LLMs show great potential to rapidly extract structured data from sigs in a no-code fashion for clinical and research applications.
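The in-context-learning setup described in the abstract can be sketched as a few-shot chat prompt: labeled sig/dose pairs are supplied as example turns, followed by the sig to annotate. The function below is a minimal illustration only; the example sigs, doses, and prompt wording are hypothetical and not taken from the paper.

```python
def build_messages(examples, sig):
    """Assemble a few-shot chat prompt: instruction, labeled example
    sig/dose pairs as user/assistant turns, then the query sig."""
    messages = [{
        "role": "system",
        "content": ("Extract the average daily dose in mg from the "
                    "medication sig. Reply with a single number."),
    }]
    for example_sig, dose in examples:
        messages.append({"role": "user", "content": example_sig})
        messages.append({"role": "assistant", "content": str(dose)})
    messages.append({"role": "user", "content": sig})
    return messages


# Illustrative (made-up) in-context examples: sig text -> average daily dose (mg)
examples = [
    ("take 1 tablet (200 mg) by mouth twice daily", 400),
    ("take 1 tablet (5 mg) by mouth every other day", 2.5),
]
msgs = build_messages(examples, "take 2 tablets (200 mg each) by mouth daily")
```

The resulting `msgs` list can then be passed to a chat-completion endpoint (e.g., as the `messages` argument of an OpenAI chat-completion call with a GPT-4 model), and the returned number compared against the gold-standard annotation.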
- Published
- 2024