[Call for Papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus
arXiv (2024)
Abstract
After last year's successful BabyLM Challenge, the competition will be hosted
again in 2024/2025. The overarching goals of the challenge remain the same;
however, some of the competition rules will be different. The big changes for
this year's competition are as follows: First, we replace the loose track with
a paper track, which allows (for example) non-model-based submissions, novel
cognitively-inspired benchmarks, or analysis techniques. Second, we are
relaxing the rules around pretraining data, and will now allow participants to
construct their own datasets provided they stay within the 100M-word or
10M-word budget. Third, we introduce a multimodal vision-and-language track,
and will release a corpus of 50% text-only and 50% image-text multimodal data
as a starting point for LM training. The purpose of this CfP is to
provide rules for this year's challenge, explain these rule changes and their
rationale in greater detail, give a timeline of this year's competition, and
provide answers to frequently asked questions from last year's challenge.