Are Emergent Abilities in Large Language Models just In-Context Learning?
arXiv (2023)
Abstract
Large language models, comprising billions of parameters and pre-trained on
extensive web-scale corpora, have been claimed to acquire certain capabilities
without having been specifically trained on them. These capabilities, referred
to as "emergent abilities," have been a driving force in discussions regarding
the potential and risks of language models. A key challenge in evaluating
emergent abilities is that they are confounded by model competencies that arise
through alternative prompting techniques, including in-context learning, which
is the ability of models to complete a task based on a few examples. We present
a novel theory that explains emergent abilities, taking into account their
potential confounding factors, and rigorously substantiate this theory through
over 1000 experiments. Our findings suggest that purported emergent abilities
are not truly emergent, but result from a combination of in-context learning,
model memory, and linguistic knowledge. Our work is a foundational step in
explaining language model performance, providing a template for their efficient
use and clarifying the paradox of their ability to excel in some instances
while faltering in others. Thus, we demonstrate that their capabilities should
not be overestimated.
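To make the abstract's definition of in-context learning concrete, here is a minimal sketch of few-shot prompting, the mechanism the paper identifies as a confounder of emergent abilities. The task (sentiment labeling), the demonstrations, and the prompt wording are hypothetical illustrations, not taken from the paper; any text-completion model would receive such prompts verbatim.

```python
# Minimal sketch of in-context learning via few-shot prompting.
# The sentiment-labeling task and all example texts are hypothetical.

def zero_shot_prompt(text: str) -> str:
    """Ask for the label directly, with no demonstrations."""
    return (
        "Label the sentiment of this review as positive or negative.\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

def few_shot_prompt(text: str) -> str:
    """Prepend a few solved examples; the model infers the task from them
    rather than from task-specific training."""
    demonstrations = (
        "Review: The plot was gripping from start to finish.\n"
        "Sentiment: positive\n\n"
        "Review: I walked out halfway through.\n"
        "Sentiment: negative\n\n"
    )
    return demonstrations + f"Review: {text}\nSentiment:"

if __name__ == "__main__":
    # The few-shot variant supplies the task implicitly through examples,
    # which is the "in-context learning" the abstract refers to.
    print(few_shot_prompt("A beautifully shot but hollow film."))
```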