
Upcycling Large Language Models into Mixture of Experts

Authors :
He, Ethan
Khattar, Abhinav
Prenger, Ryan
Korthikanti, Vijay
Yan, Zijie
Liu, Tong
Fan, Shiqing
Aithal, Ashwath
Shoeybi, Mohammad
Catanzaro, Bryan
Publication Year :
2024

Abstract

Upcycling pre-trained dense language models into sparse mixture-of-experts (MoE) models is an efficient approach to increase the model capacity of already trained models. However, optimal techniques for upcycling at scale remain unclear. In this work, we conduct an extensive study of upcycling methods and hyperparameters for billion-parameter scale language models. We propose a novel "virtual group" initialization scheme and weight scaling approach to enable upcycling into fine-grained MoE architectures. Through ablations, we find that upcycling outperforms continued dense model training. In addition, we show that softmax-then-topK expert routing improves over the topK-then-softmax approach and that higher-granularity MoEs can improve accuracy. Finally, we upcycled Nemotron-4 15B on 1T tokens and compared it to a continuously trained version of the same model on the same 1T tokens: the continuously trained model achieved 65.3% MMLU, whereas the upcycled model achieved 67.6%. Our results offer insights and best practices to effectively leverage upcycling for building MoE language models.
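To make the routing comparison in the abstract concrete, below is a minimal sketch of the two router gating orders, softmax-then-topK versus topK-then-softmax. It is not the paper's implementation; the tensor shapes, the number of experts, and k are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def softmax_then_topk(logits: torch.Tensor, k: int):
    # Normalize over ALL experts first, then keep the k largest probabilities.
    # The selected gate weights generally do not sum to 1.
    probs = F.softmax(logits, dim=-1)               # [num_tokens, num_experts]
    weights, indices = torch.topk(probs, k, dim=-1)
    return weights, indices


def topk_then_softmax(logits: torch.Tensor, k: int):
    # Select the k largest logits first, then normalize only among those k,
    # so the gate weights always sum to 1 over the chosen experts.
    top_logits, indices = torch.topk(logits, k, dim=-1)
    weights = F.softmax(top_logits, dim=-1)
    return weights, indices


if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(4, 8)  # 4 tokens, 8 experts (illustrative sizes)
    print(softmax_then_topk(logits, k=2))
    print(topk_then_softmax(logits, k=2))
```

One intuition for the difference: with top-1 routing, topK-then-softmax assigns a constant gate weight of 1 to the selected expert, whereas softmax-then-topK retains a probability that still reflects the router's confidence across all experts.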

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.07524
Document Type :
Working Paper