
Checkpoint Merging via Bayesian Optimization in LLM Pretraining

Authors:
Liu, Deyuan
Wang, Zecheng
Wang, Bingning
Chen, Weipeng
Li, Chunshan
Tu, Zhiying
Chu, Dianhui
Li, Bo
Sui, Dianbo
Publication Year: 2024

Abstract

The rapid proliferation of large language models (LLMs) such as GPT-4 and Gemini underscores the intense resource demands of their training, posing significant challenges due to substantial computational and environmental costs. To alleviate this issue, we propose checkpoint merging during LLM pretraining. The method merges LLM checkpoints that share a training trajectory, searching an extensive space of merging weights via Bayesian optimization. Through various experiments, we demonstrate that: (1) checkpoint merging can augment pretraining, offering substantial benefits at minimal cost; (2) although the method requires a held-out dataset to guide the search, the merged model still generalizes robustly across diverse domains, a pivotal property in pretraining.
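A minimal sketch of the idea described in the abstract, not the authors' implementation: two checkpoints from the same training run are linearly interpolated, and the interpolation weight is chosen by Bayesian optimization against a held-out metric. The `evaluate` callback, the use of PyTorch state dicts, and `skopt` as the optimization backend are all assumptions for illustration.

```python
# Sketch of checkpoint merging with a Bayesian-optimized merging weight.
# Assumes two PyTorch checkpoints from a shared training trajectory and a
# user-supplied `evaluate(state_dict)` that returns held-out loss/perplexity.
import torch
from skopt import gp_minimize
from skopt.space import Real


def merge_state_dicts(sd_a, sd_b, lam):
    """Linearly interpolate two checkpoints: lam * A + (1 - lam) * B."""
    return {k: lam * sd_a[k] + (1.0 - lam) * sd_b[k] for k in sd_a}


def find_merge_weight(sd_a, sd_b, evaluate, n_calls=20):
    """Search the merging weight in [0, 1] that minimizes held-out loss."""
    def objective(params):
        lam = params[0]
        merged = merge_state_dicts(sd_a, sd_b, lam)
        return evaluate(merged)  # lower is better, e.g. validation perplexity

    result = gp_minimize(objective, [Real(0.0, 1.0, name="lambda")],
                         n_calls=n_calls, random_state=0)
    return result.x[0], result.fun  # best weight and its held-out score
```

In this sketch, the held-out dataset enters only through `evaluate`, which matches the abstract's point that a held-out set guides the weight search while the merged checkpoint is then used as-is for downstream pretraining or evaluation.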

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2403.19390
Document Type: Working Paper