1. Hybrid Student-Teacher Large Language Model Refinement for Cancer Toxicity Symptom Extraction
- Authors
Khanmohammadi, Reza, Ghanem, Ahmed I., Verdecchia, Kyle, Hall, Ryan, Elshaikh, Mohamed, Movsas, Benjamin, Bagher-Ebadian, Hassan, Luo, Bing, Chetty, Indrin J., Alhanai, Tuka, Thind, Kundan, and Ghassemi, Mohammad M.
- Subjects
Computer Science - Computation and Language; Computer Science - Information Retrieval
- Abstract
Large Language Models (LLMs) offer significant potential for clinical symptom extraction, but their deployment in healthcare settings is constrained by privacy concerns, computational limitations, and operational costs. This study investigates the optimization of compact LLMs for cancer toxicity symptom extraction using a novel iterative refinement approach. We employ a student-teacher architecture, utilizing Zephyr-7b-beta and Phi3-mini-128 as student models and GPT-4o as the teacher, to dynamically select between prompt refinement, Retrieval-Augmented Generation (RAG), and fine-tuning strategies. Our experiments on 294 clinical notes covering 12 post-radiotherapy toxicity symptoms demonstrate the effectiveness of this approach. The RAG method proved most efficient, improving average accuracy scores from 0.32 to 0.73 for Zephyr-7b-beta and from 0.40 to 0.87 for Phi3-mini-128 during refinement. In the test set, both models showed an approximate 0.20 increase in accuracy across symptoms. Notably, this improvement was achieved at a cost 45 times lower than GPT-4o for Zephyr and 79 times lower for Phi-3. These results highlight the potential of iterative refinement techniques in enhancing the capabilities of compact LLMs for clinical applications, offering a balance between performance, cost-effectiveness, and privacy preservation in healthcare settings.
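The refinement process described in the abstract — a teacher model scoring a student and dynamically selecting among improvement strategies — can be sketched as a simple selection loop. This is an illustrative reconstruction, not the authors' code: the scoring function, the strategy representations, and the stopping rule are all assumptions; in the paper the teacher is GPT-4o and the strategies are prompt refinement, RAG, and fine-tuning.

```python
def teacher_score(preds, labels):
    """Stand-in for teacher (GPT-4o) grading: fraction of correct extractions."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def refine(student, notes, labels, strategies, rounds=3):
    """Each round, apply every candidate strategy to the current student and
    keep the variant the teacher scores highest; stop when no strategy helps."""
    log = []
    for _ in range(rounds):
        candidates = {name: make(student) for name, make in strategies.items()}
        name, new_student = max(
            candidates.items(),
            key=lambda kv: teacher_score(kv[1](notes), labels),
        )
        if teacher_score(new_student(notes), labels) <= teacher_score(student(notes), labels):
            break  # no candidate improved on the current student
        student = new_student
        log.append(name)
    return student, log

# Toy demo with mock data: "notes" are ints, labels their parities.
# The baseline student always answers 0; a mock "rag" strategy grounds
# answers in the input, so the loop should select it.
notes = [1, 2, 3, 4]
labels = [n % 2 for n in notes]
baseline = lambda xs: [0 for _ in xs]
strategies = {
    "prompt_refinement": lambda s: (lambda xs: [1 for _ in xs]),
    "rag": lambda s: (lambda xs: [x % 2 for x in xs]),
}
student, chosen = refine(baseline, notes, labels, strategies)
```

In this toy run the loop adopts the mock "rag" strategy and then halts once no candidate beats the current student, mirroring the paper's finding that RAG was the most effective refinement path.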
- Published
- 2024