[Call for Papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus

Authors:
Choshen, Leshem
Cotterell, Ryan
Hu, Michael Y.
Linzen, Tal
Mueller, Aaron
Ross, Candace
Warstadt, Alex
Wilcox, Ethan
Williams, Adina
Zhuang, Chengxu
Publication Year:
2024

Abstract

After last year's successful BabyLM Challenge, the competition will be hosted again in 2024/2025. The overarching goals of the challenge remain the same; however, some of the competition rules will be different. The major changes for this year's competition are as follows: First, we replace the loose track with a paper track, which allows (for example) non-model-based submissions, novel cognitively inspired benchmarks, or analysis techniques. Second, we are relaxing the rules around pretraining data and will now allow participants to construct their own datasets, provided they stay within the 100M-word or 10M-word budget. Third, we introduce a multimodal vision-and-language track, and will release a corpus of 50% text-only and 50% image-text multimodal data as a starting point for language model training. The purpose of this CfP is to provide rules for this year's challenge, explain these rule changes and their rationale in greater detail, give a timeline of this year's competition, and answer frequently asked questions from last year's challenge.

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2404.06214
Document Type:
Working Paper