
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research

Authors :
Soldaini, Luca
Kinney, Rodney
Bhagia, Akshita
Schwenk, Dustin
Atkinson, David
Authur, Russell
Bogin, Ben
Chandu, Khyathi
Dumas, Jennifer
Elazar, Yanai
Hofmann, Valentin
Jha, Ananya Harsh
Kumar, Sachin
Lucy, Li
Lyu, Xinxi
Lambert, Nathan
Magnusson, Ian
Morrison, Jacob
Muennighoff, Niklas
Naik, Aakanksha
Nam, Crystal
Peters, Matthew E.
Ravichander, Abhilasha
Richardson, Kyle
Shen, Zejiang
Strubell, Emma
Subramani, Nishant
Tafjord, Oyvind
Walsh, Pete
Zettlemoyer, Luke
Smith, Noah A.
Hajishirzi, Hannaneh
Beltagy, Iz
Groeneveld, Dirk
Dodge, Jesse
Lo, Kyle
Publication Year :
2024

Abstract

Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often released without accompanying training data or recipes to reproduce them. As a result, it is challenging to conduct and advance scientific research on language modeling, such as understanding how training data impacts model capabilities and limitations. To facilitate scientific research on language model pretraining, we curate and release Dolma, a three-trillion-token English corpus, built from a diverse mixture of web content, scientific papers, code, public-domain books, social media, and encyclopedic materials. We extensively document Dolma, including its design principles, details about its construction, and a summary of its contents. We present analyses and experimental results on intermediate states of Dolma to share what we have learned about important data curation practices. Finally, we open-source our data curation toolkit to enable reproduction of our work as well as support further research in large-scale data curation.

Comment: Accepted at ACL 2024; Dataset: https://hf.co/datasets/allenai/dolma; Code: https://github.com/allenai/dolma
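For readers who want to inspect the corpus directly, the snippet below is a minimal sketch of streaming records from the Hugging Face dataset linked above using the datasets library. The configuration name "v1_6-sample" and the "text" field name are assumptions for illustration; consult the dataset card at https://hf.co/datasets/allenai/dolma for the actual versions and schema.

    # Minimal sketch: stream a few Dolma documents without downloading the full corpus.
    from datasets import load_dataset

    # NOTE: the config name "v1_6-sample" is an assumption; check the dataset card
    # for the versions actually published.
    dolma = load_dataset("allenai/dolma", name="v1_6-sample", split="train", streaming=True)

    for i, doc in enumerate(dolma):
        # Each record is assumed to carry a "text" field with the document body.
        print(doc["text"][:200])
        if i >= 2:
            break

Streaming mode avoids materializing the multi-terabyte corpus on disk, which is usually the practical way to spot-check a pretraining dataset of this size.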

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.00159
Document Type :
Working Paper