
Med42 -- Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches

Authors:
Christophe, Clément
Kanithi, Praveen K
Munjal, Prateek
Raha, Tathagata
Hayat, Nasir
Rajan, Ronnie
Al-Mahrooqi, Ahmed
Gupta, Avani
Salman, Muhammad Umar
Gosal, Gurpreet
Kanakiya, Bhargav
Chen, Charles
Vassilieva, Natalia
Amor, Boulbaba Ben
Pimentel, Marco AF
Khan, Shadab
Publication Year:
2024

Abstract

This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies, full-parameter fine-tuning and parameter-efficient tuning, within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question-answering capabilities. Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks. Notably, our medical LLM Med42 achieved 72% accuracy on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs. Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications.

Comment: Published at AAAI 2024 Spring Symposium - Clinical Foundation Models
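To make the contrast in the abstract concrete, the sketch below shows how the two tuning strategies are typically set up in code. It assumes the Hugging Face transformers and peft libraries, a Llama-2 7B base checkpoint, and LoRA as the parameter-efficient method; none of these specifics are stated in this record, so they are illustrative assumptions rather than the paper's actual configuration.

# A minimal sketch of the two strategies compared in the paper. The base
# checkpoint name, the LoRA hyperparameters, and the use of the Hugging Face
# transformers/peft libraries are illustrative assumptions; the paper's
# exact training setup may differ.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Llama-2-7b-hf"  # assumed Llama-2 base model

# Strategy 1: full-parameter fine-tuning. Every weight in the network
# receives gradient updates (the default for a freshly loaded model).
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"full fine-tuning: {n_trainable:,} trainable parameters")

# Strategy 2: parameter-efficient tuning with LoRA. get_peft_model freezes
# the base weights and trains only small low-rank adapter matrices,
# injected here into the attention projections.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,                        # adapter scaling (illustrative)
    target_modules=["q_proj", "v_proj"],  # modules to adapt (illustrative)
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of weights

The efficiency gap the paper evaluates follows directly from this setup: full-parameter tuning updates all of the base model's weights, while the LoRA variant trains only the injected adapters, which is what makes the parameter-efficient approach far cheaper in memory and compute.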

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.14779
Document Type:
Working Paper