
nanoLM: an Affordable LLM Pre-training Benchmark via Accurate Loss Prediction across Scales

Authors:
Yao, Yiqun
Fan, Siqi
Huang, Xiusheng
Fang, Xuezhi
Li, Xiang
Ni, Ziyi
Jiang, Xin
Meng, Xuying
Han, Peng
Shang, Shuo
Liu, Kang
Sun, Aixin
Wang, Yequan
Publication Year: 2023

Abstract

As language models scale up, it becomes increasingly expensive to verify research ideas because conclusions on small models do not trivially transfer to large ones. A possible solution is to establish a generic system that accurately predicts certain metrics for large models without training them. Existing scaling laws require hyperparameter search on the largest models, limiting their predictive capability. In this paper, we present an approach (namely μScaling) to predict the pre-training loss, based on our observations that Maximal Update Parametrization (μP) enables accurate fitting of scaling laws close to common loss basins in hyperparameter space. With μScaling, different model designs can be compared on large scales by training only their smaller counterparts. Further, we introduce nanoLM: an affordable LLM pre-training benchmark that facilitates this new research paradigm. With around 14% of the one-time pre-training cost, we can accurately forecast the loss for models up to 52B. Our goal with nanoLM is to empower researchers with limited resources to reach meaningful conclusions on large models. We also aspire for our benchmark to serve as a bridge between the academic community and industry. Code for μScaling is available at https://github.com/cofe-ai/Mu-scaling. Code for nanoLM will be available later.

Comment: This is a modified and extended version of our previous Mu-scaling work released in April 2023 (see v1).
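The core idea described in the abstract, fitting a scaling law on small models whose hyperparameters are transferred via μP and then extrapolating the loss to a large target size, can be illustrated with a minimal sketch. The power-law functional form, the example model sizes and losses, and the 52B extrapolation target below are assumptions for illustration only; the paper's exact fitting procedure and data may differ.

```python
# Minimal sketch (assumed form, not the paper's exact procedure):
# fit L(N) = a * N^(-b) + c to losses measured on a few small proxy models
# trained with shared, muP-transferred hyperparameters, then extrapolate
# the loss to a large target model size.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_params, a, b, c):
    """Power-law loss curve with an irreducible-loss offset c."""
    return a * np.power(n_params, -b) + c

# Hypothetical (model size in parameters, final pre-training loss) pairs.
sizes = np.array([39e6, 76e6, 153e6, 310e6, 620e6])
losses = np.array([3.95, 3.70, 3.48, 3.30, 3.15])

params, _ = curve_fit(scaling_law, sizes, losses, p0=[10.0, 0.1, 2.0], maxfev=10000)

target_size = 52e9  # extrapolate to a 52B-parameter model
predicted_loss = scaling_law(target_size, *params)
print(f"fitted a={params[0]:.3f}, b={params[1]:.3f}, c={params[2]:.3f}")
print(f"predicted loss at 52B parameters: {predicted_loss:.3f}")
```

The point of the sketch is the workflow, not the numbers: because μP keeps the near-optimal hyperparameters stable across widths, the small-model losses lie on a curve that can be fit cheaply and queried at sizes that were never trained.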

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2304.06875
Document Type: Working Paper