
Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities

Authors:
Yang, Enneng
Shen, Li
Guo, Guibing
Wang, Xingwei
Cao, Xiaochun
Zhang, Jie
Tao, Dacheng
Publication Year:
2024

Abstract

Model merging is an efficient model-enhancement technique in the machine learning community that requires neither the collection of raw training data nor expensive computation. As model merging becomes increasingly prevalent across various fields, it is crucial to understand the available model merging techniques comprehensively. However, the literature lacks a systematic and thorough review of these techniques. This survey provides a comprehensive overview of model merging methods and theories, their applications in various domains and settings, and future research directions. Specifically, we first propose a new taxonomy that exhaustively covers existing model merging methods. Second, we discuss the application of model merging techniques in large language models, multimodal large language models, and 10+ machine learning subfields, including continual learning, multi-task learning, and few-shot learning. Finally, we highlight the remaining challenges of model merging and discuss future research directions. A comprehensive list of papers about model merging is available at https://github.com/EnnengYang/Awesome-Model-Merging-Methods-Theories-Applications.
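To make the abstract's notion of "merging without raw training data or expensive computation" concrete, the sketch below shows the simplest family of merging methods: uniform weight averaging of models fine-tuned from a shared initialization (in the spirit of model soups). This is an illustrative assumption for exposition, not the survey's specific method; the helper name `merge_state_dicts` and the toy models are hypothetical.

```python
# Minimal sketch: merge models by averaging their parameters.
# Assumes all models share the same architecture and initialization
# lineage, so their state dicts have identical keys and shapes.
import torch
import torch.nn as nn


def merge_state_dicts(state_dicts, weights=None):
    """Average compatible state dicts, optionally with per-model weights."""
    if weights is None:
        # Uniform averaging if no merging coefficients are given.
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        # Weighted sum of the corresponding parameter tensors.
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged


# Toy usage: two "fine-tuned" copies of the same architecture.
model_a = nn.Linear(4, 2)
model_b = nn.Linear(4, 2)

merged_model = nn.Linear(4, 2)
merged_model.load_state_dict(
    merge_state_dicts([model_a.state_dict(), model_b.state_dict()])
)
```

Note that the merge touches only the trained weights, which is why, as the abstract emphasizes, no raw training data or retraining compute is needed; more advanced methods surveyed in the paper (e.g., task-arithmetic-style approaches) refine how the coefficients and parameter subsets are chosen.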

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2408.07666
Document Type: Working Paper