
DEM: Distribution Edited Model for Training with Mixed Data Distributions

Authors:
Ram, Dhananjay
Rawal, Aditya
Hardalov, Momchil
Pappas, Nikolaos
Zha, Sheng
Publication Year:
2024

Abstract

Training with mixed data distributions is a common and important part of creating multi-task and instruction-following models. The diversity of the data distributions and the cost of joint training make the optimization procedure extremely challenging. Data mixing methods partially address this problem, albeit with sub-optimal performance across data sources, and they require multiple expensive training runs. In this paper, we propose a simple and efficient alternative that better optimizes over the data sources by combining models individually trained on each data source with the base model using basic element-wise vector operations. The resulting model, namely the Distribution Edited Model (DEM), is 11x cheaper than standard data mixing and outperforms strong baselines on a variety of benchmarks, yielding up to 6.2% improvement on MMLU, 11.5% on BBH, 16.1% on DROP, 6% on MathQA, and 9.3% on HELM with models of size 3B to 13B. Notably, DEM does not require full re-training when a single data source is modified, making it flexible and scalable for training with diverse data sources.

Comment: Accepted to EMNLP 2024 (Main Conference)
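The abstract describes combining per-source fine-tuned models with the base model through element-wise vector operations. A common instance of this idea is weight-delta arithmetic: treat each fine-tuned model's difference from the base as a "distribution vector" and add a weighted sum of these vectors back onto the base weights. The sketch below illustrates that reading; the exact combination rule, the function name `distribution_edited_model`, and the scaling coefficients are assumptions for illustration, not taken from the paper.

```python
from typing import Dict, List

import torch


def distribution_edited_model(
    base: Dict[str, torch.Tensor],
    finetuned: List[Dict[str, torch.Tensor]],
    weights: List[float],
) -> Dict[str, torch.Tensor]:
    """Combine per-source fine-tuned models with a base model element-wise.

    Each fine-tuned model contributes a distribution vector
    (finetuned - base); the edited model adds a weighted sum of these
    vectors onto the base weights. This is one plausible reading of the
    "basic element-wise vector operations" mentioned in the abstract.
    """
    # Start from a copy of the base weights.
    edited = {name: p.clone() for name, p in base.items()}
    for model, w in zip(finetuned, weights):
        for name, p in model.items():
            # Element-wise: base + sum_i w_i * (finetuned_i - base)
            edited[name] += w * (p - base[name])
    return edited
```

Under this formulation, each data source contributes an independent vector, so modifying one source only requires re-deriving that source's fine-tuned model and recombining, which is consistent with the abstract's claim that DEM avoids full re-training.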

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.15570
Document Type:
Working Paper