
BabyLlama-2: Ensemble-Distilled Models Consistently Outperform Teachers With Limited Data

Authors:
Tastet, Jean-Loup
Timiryasov, Inar
Publication Year:
2024

Abstract

We present BabyLlama-2, a 345 million parameter model distillation-pretrained from two teachers on a 10 million word corpus for the BabyLM competition. On BLiMP and SuperGLUE benchmarks, BabyLlama-2 outperforms baselines trained on both 10 and 100 million word datasets with the same data mix, as well as its teacher models. Through an extensive hyperparameter sweep, we demonstrate that the advantages of distillation cannot be attributed to suboptimal hyperparameter selection of the teachers. Our findings underscore the need for further investigation into distillation techniques, particularly in data-limited settings.

Comment: 9 pages, 3 figures, 5 tables, submitted to the BabyLM Challenge (CoNLL 2024 Shared Task)
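To illustrate the kind of two-teacher distillation objective the abstract refers to, the following is a minimal sketch in PyTorch. It assumes a standard recipe (averaging the teachers' softened distributions and mixing a KL term with hard-label cross-entropy); the function name, temperature, and mixing weight are illustrative and not taken from the paper's actual training code.

```python
# Minimal sketch of ensemble distillation from two teachers.
# Hypothetical function and hyperparameters; not the authors' exact recipe.
import torch
import torch.nn.functional as F

def ensemble_distillation_loss(student_logits, teacher1_logits, teacher2_logits,
                               labels, temperature=2.0, alpha=0.5):
    """Mix hard-label cross-entropy with a KL term against the
    averaged, temperature-softened teacher distribution."""
    # Soft targets: average the two teachers' softened distributions.
    t1 = F.softmax(teacher1_logits / temperature, dim=-1)
    t2 = F.softmax(teacher2_logits / temperature, dim=-1)
    soft_targets = 0.5 * (t1 + t2)

    # KL divergence between the student's softened log-probs and the soft targets,
    # rescaled by T^2 as is conventional in distillation.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(log_p_student, soft_targets, reduction="batchmean") * temperature ** 2

    # Standard next-token cross-entropy on the hard labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kl + (1.0 - alpha) * ce

# Usage with dummy tensors (vocabulary size 100, batch of 4 tokens).
student = torch.randn(4, 100)
teacher1 = torch.randn(4, 100)
teacher2 = torch.randn(4, 100)
labels = torch.randint(0, 100, (4,))
loss = ensemble_distillation_loss(student, teacher1, teacher2, labels)
```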

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.17312
Document Type:
Working Paper