Paloma: A Benchmark for Evaluating Language Model Fit
CoRR (2023)
Abstract
Language models (LMs) commonly report perplexity on monolithic data held out
from training. Implicitly or explicitly, this data is composed of
domains – varying distributions of language. Rather than assuming that
perplexity on one distribution extrapolates to others, Perplexity Analysis for
Language Model Assessment (Paloma) measures LM fit to 585 text domains,
ranging from nytimes.com to r/depression on Reddit. We invite submissions to
our benchmark and organize results by comparability, based on compliance with
guidelines such as removal of benchmark contamination from pretraining data.
Submissions can also record parameter and training token counts to enable
comparisons of Pareto efficiency: performance as a function of these
measures of cost. We populate the benchmark with results from 6 baselines
pretrained on popular corpora. In case studies, we demonstrate analyses that
Paloma makes possible, such as finding that pretraining without data beyond
Common Crawl leads to inconsistent fit to many domains.
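
For context, the per-domain quantity Paloma reports is perplexity. A standard formulation is given below (the paper's exact aggregation over documents within a domain may differ):

$$\mathrm{PPL}(D) = \exp\!\left(-\frac{1}{N_D}\sum_{i=1}^{N_D}\log p_\theta\big(x_i \mid x_{<i}\big)\right)$$

where $D$ is a domain with $N_D$ held-out tokens $x_1, \dots, x_{N_D}$ and $p_\theta$ is the model's next-token distribution. Lower perplexity on $D$ indicates better fit to that domain; Paloma's premise is that this value can vary widely across its 585 domains even when the monolithic average looks good.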