Concept-aware Data Construction Improves In-context Learning of Language Models
arXiv (2024)
Abstract
Many recent language models (LMs) are capable of in-context learning (ICL),
manifested in the LMs' ability to perform a new task solely from
natural-language instructions. Previous work curating in-context learners
assumes that ICL emerges from a vast over-parametrization or the scale of
multi-task training. However, recent theoretical work attributes the ICL
ability to concept-dependent training data and creates functional in-context
learners even in small-scale, synthetic settings.
In this work, we practically explore this newly identified axis of ICL
quality. We propose Concept-aware Training (CoAT), a framework for constructing
training scenarios that make it beneficial for the LM to learn to utilize the
analogical reasoning concepts from demonstrations. We find that by using CoAT,
pre-trained transformers can learn to better utilize new latent concepts from
demonstrations, and that this ability makes ICL more robust to the functional
deficiencies of previous models. Finally, we show that concept-aware
in-context learning is more effective than traditional instruction tuning on a
majority of new tasks, resulting in performance comparable to previous
in-context learners trained on orders of magnitude more data.