A Survey on Data Selection for Language Models
CoRR (2024)
Abstract
A major factor in the recent success of large language models is the use of
enormous and ever-growing text datasets for unsupervised pre-training. However,
naively training a model on all available data may not be optimal (or
feasible), as the quality of available text data can vary. Filtering out data
can also decrease the carbon footprint and financial costs of training models
by reducing the amount of training required.
Data selection methods aim to determine which candidate data points to
include in the training dataset and how to appropriately sample from the
selected data points. The promise of improved data selection methods has caused
the volume of research in the area to rapidly expand. However, because deep
learning is mostly driven by empirical evidence and experimentation on
large-scale data is expensive, few organizations have the resources for
extensive data selection research. Consequently, knowledge of effective data
selection practices has become concentrated within a few organizations, many of
which do not openly share their findings and methodologies.
To narrow this gap in knowledge, we present a comprehensive review of
existing literature on data selection methods and related research areas,
providing a taxonomy of existing approaches. By describing the current
landscape of research, this work aims to accelerate progress in data selection
by establishing an entry point for new and established researchers.
Additionally, throughout this review we draw attention to noticeable holes in
the literature and conclude the paper by proposing promising avenues for future
research.
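As a minimal illustration of the two decisions the abstract attributes to data selection methods (which candidate points to include, and how to sample from those kept), here is a hedged sketch assuming a hypothetical `quality_score` heuristic; real pipelines use learned classifiers, perplexity filters, or deduplication signals instead.

```python
import random

def quality_score(text):
    # Hypothetical quality heuristic for illustration only:
    # favor documents with more words, capped at a score of 1.0.
    words = text.split()
    if not words:
        return 0.0
    return min(len(words) / 10.0, 1.0)

def select_and_sample(candidates, threshold=0.5, k=2, seed=0):
    """Filter candidates by quality, then draw k points weighted by score."""
    kept = [c for c in candidates if quality_score(c) >= threshold]
    weights = [quality_score(c) for c in kept]
    rng = random.Random(seed)
    # Score-weighted sampling (with replacement) from the retained pool.
    return rng.choices(kept, weights=weights, k=k)

corpus = [
    "short",
    "a document with enough words to pass the quality threshold easily",
    "another reasonably long and coherent training document goes here",
]
sampled = select_and_sample(corpus)
print(sampled)  # the low-quality "short" document is filtered out
```

This separates the inclusion decision (the threshold filter) from the sampling decision (score-weighted draws), mirroring the two-stage framing in the abstract.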