
If You Can't Use Them, Recycle Them: Optimizing Merging at Scale Mitigates Performance Tradeoffs

Authors:
Khalifa, Muhammad
Tan, Yi-Chern
Ahmadian, Arash
Hosking, Tom
Lee, Honglak
Wang, Lu
Üstün, Ahmet
Sherborne, Tom
Gallé, Matthias
Publication Year: 2024

Abstract

Model merging has shown great promise at combining expert models, but the benefit of merging is unclear when merging "generalist" models trained on many tasks. We explore merging in the context of large (~100B) models, by recycling checkpoints that exhibit tradeoffs among different tasks. Such checkpoints are often created in the process of developing a frontier model, and many suboptimal ones are usually discarded. Given a pool of model checkpoints obtained from different training runs (e.g., different stages, objectives, hyperparameters, and data mixtures), which naturally show tradeoffs across different language capabilities (e.g., instruction following vs. code generation), we investigate whether merging can recycle such suboptimal models into a Pareto-optimal one. Our optimization algorithm tunes the weight of each checkpoint in a linear combination, resulting in a Pareto-optimal model that outperforms both individual models and merge-based baselines. Further analysis shows that good merges tend to include almost all checkpoints with non-zero weights, indicating that even seemingly bad initial checkpoints can contribute to good final merges.
Comment: 13 pages, 9 figures
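As a rough illustration of the linear combination the abstract describes, a minimal PyTorch sketch follows. The abstract does not specify the optimization procedure that tunes the coefficients, so the weights are treated as given inputs here; merge_checkpoints and the example paths are hypothetical names, not the authors' code.

    import torch

    def merge_checkpoints(state_dicts, weights):
        # Linear combination of parameters: merged = sum_i w_i * theta_i.
        # Assumes all checkpoints share one architecture, so their state
        # dicts have identical keys and tensor shapes.
        merged = {}
        for name in state_dicts[0]:
            merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
        return merged

    # Usage sketch: three suboptimal checkpoints, coefficients assumed
    # to come from some external weight search.
    # checkpoints = [torch.load(p) for p in ["ckpt_a.pt", "ckpt_b.pt", "ckpt_c.pt"]]
    # model.load_state_dict(merge_checkpoints(checkpoints, [0.5, 0.3, 0.2]))

Per the abstract's finding that good merges assign non-zero weight to almost all checkpoints, a weight search over such a pool would typically keep even apparently weak checkpoints in the combination rather than pruning them.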

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2412.04144
Document Type: Working Paper