
Accelerating Large Language Model Pretraining via LFR Pedagogy: Learn, Focus, and Review

Authors :
Prakriya, Neha
Yen, Jui-Nan
Hsieh, Cho-Jui
Cong, Jason
Publication Year :
2024

Abstract

Large Language Model (LLM) pretraining traditionally relies on autoregressive language modeling over randomly sampled data blocks from web-scale datasets. Drawing inspiration from human learning techniques such as spaced repetition, we hypothesize that random data sampling for LLMs leads to high training costs and lower-quality models that tend to forget data. To effectively commit web-scale information to long-term memory, we propose the LFR (Learn, Focus, and Review) pedagogy, a new dynamic training paradigm that focuses on and repeatedly reviews complex data blocks at systematic intervals based on the model's learning pace and progress. LFR records the model's perplexity for different data blocks and frequently revisits blocks with higher perplexity, which are more likely to be forgotten. We pretrain the GPT-2 models (124M - 1.5B) from scratch on the OpenWebText dataset using LFR. We evaluate on downstream tasks spanning language modeling, question answering, translation, and problem solving, achieving consistently lower perplexity and higher accuracy than the baseline OpenAI models while obtaining a 20x pretraining speed-up.
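The abstract describes LFR as recording per-block perplexities and revisiting high-perplexity blocks more often. Below is a minimal Python sketch of that idea; the function names, the `review_every` interval, and the `focus_fraction` ratio are illustrative assumptions, not the paper's actual hyperparameters or implementation.

```python
# Illustrative sketch of a perplexity-driven "learn, focus, and review" sampler.
# All names and constants here are hypothetical; the abstract does not specify
# the scheduling details used in the paper.
import math
import random


def block_perplexity(model, block):
    """Placeholder: return exp(mean token loss) of `block` under `model`.

    A real pretraining loop would run a forward pass and compute
    exp(cross-entropy); here a dummy value is returned so the sketch runs.
    """
    return math.exp(random.uniform(2.0, 6.0))


def lfr_training_loop(model, data_blocks, steps, review_every=100, focus_fraction=0.25):
    """Track per-block perplexity and revisit high-perplexity blocks more often."""
    # Learn: one initial pass to record a perplexity score per block.
    ppl = {i: block_perplexity(model, b) for i, b in enumerate(data_blocks)}
    focus_set = list(ppl)

    for step in range(steps):
        if step % review_every == 0:
            # Focus/Review: re-rank blocks and keep the hardest fraction
            # (highest perplexity, i.e., most likely to be forgotten).
            ranked = sorted(ppl, key=ppl.get, reverse=True)
            focus_set = ranked[: max(1, int(focus_fraction * len(ranked)))]

        i = random.choice(focus_set)
        # train_step(model, data_blocks[i])  # gradient update on the chosen block
        ppl[i] = block_perplexity(model, data_blocks[i])  # refresh its recorded perplexity


# Example usage with toy data in place of a real model and dataset:
lfr_training_loop(model=None, data_blocks=[f"block_{i}" for i in range(1000)], steps=500)
```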

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2409.06131
Document Type :
Working Paper