
Fine Tuning Large Language Models for Medicine: The Role and Importance of Direct Preference Optimization

Authors:
Savage, Thomas
Ma, Stephen
Boukil, Abdessalem
Patel, Vishwesh
Rangan, Ekanath
Rodriguez, Ivan
Chen, Jonathan H
Publication Year: 2024

Abstract

Large Language Model (LLM) fine tuning is underutilized in the field of medicine. Two of the most common fine tuning methods are Supervised Fine Tuning (SFT) and Direct Preference Optimization (DPO), but there is little guidance on when to use each technique. In this investigation, we compare the performance of SFT and DPO across five common natural language tasks in medicine: Classification with text data, Classification with numeric data, Clinical Reasoning, Summarization, and Clinical Triage. We find that SFT alone is sufficient for Classification with text data, whereas DPO improves performance on the more complex tasks of Clinical Reasoning, Summarization, and Clinical Triage. Our results establish the role and importance of DPO fine tuning within medicine and, consequently, call attention to current software gaps that prevent widespread deployment of this technique.
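
The abstract contrasts SFT, which trains on demonstration text alone, with DPO, which learns from preference pairs. As a rough illustration of how the two stages differ in practice, the sketch below uses the Hugging Face TRL library; the base model, example data, and hyperparameters are placeholder assumptions for illustration only and do not reflect the authors' actual experimental setup (argument names such as processing_class also vary across TRL versions).

```python
# Sketch of the SFT -> DPO workflow the abstract compares, using Hugging Face
# TRL. Model name, data, and hyperparameters are illustrative placeholders.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Stage 1: Supervised Fine Tuning on plain demonstration text.
sft_data = Dataset.from_dict(
    {"text": ["Patient note: ...\nAssessment: ..."]}  # placeholder example
)
sft_trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="sft-model", max_steps=10),
    train_dataset=sft_data,
)
sft_trainer.train()

# Stage 2: Direct Preference Optimization on (prompt, chosen, rejected)
# triples; unlike SFT, DPO learns from ranked response pairs.
dpo_data = Dataset.from_dict(
    {
        "prompt": ["Triage this presentation: ..."],
        "chosen": ["Preferred clinician-rated response ..."],
        "rejected": ["Dispreferred response ..."],
    }
)
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,  # DPO typically starts from the SFT checkpoint
    args=DPOConfig(output_dir="dpo-model", beta=0.1, max_steps=10),
    train_dataset=dpo_data,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
)
dpo_trainer.train()
```

The key practical difference is the data each stage consumes: SFT needs only example completions, while DPO additionally requires a dispreferred alternative for every prompt, which is one reason DPO datasets and tooling are harder to assemble in clinical settings.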

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2409.12741
Document Type: Working Paper