A Survey on In-context Learning
arXiv (2022)
Abstract
With the increasing capabilities of large language models (LLMs), in-context
learning (ICL) has emerged as a new paradigm for natural language processing
(NLP), where LLMs make predictions based on contexts augmented with a few
examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has
become a significant trend. In this paper, we survey and summarize the
progress and challenges of ICL. We first present a formal definition of ICL
and clarify its relationship to related studies. Then, we organize and discuss
advanced techniques, including training strategies, prompt designing
strategies, and related analysis. Additionally, we explore various ICL
application scenarios, such as data engineering and knowledge updating.
Finally, we address the challenges of ICL and suggest potential directions for
further research. We hope that our work can encourage more research on
uncovering how ICL works and improving ICL.
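The core idea described above, that an LLM makes predictions from a context augmented with a few examples rather than from parameter updates, can be sketched as a simple prompt-construction routine. The task, demonstrations, and template below are hypothetical illustrations and are not taken from the survey itself:

```python
# Minimal sketch of in-context learning (ICL) prompt construction.
# The sentiment task and its demonstrations are hypothetical examples.

def build_icl_prompt(demonstrations, query):
    """Concatenate labeled examples with a new query so that an LLM
    can infer the task purely from context (no parameter updates)."""
    lines = []
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final query is left unlabeled; the model completes the label.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_icl_prompt(demos, "A delightful surprise of a film.")
print(prompt)
```

The resulting string would then be sent to an LLM, whose completion after the final "Sentiment:" serves as the prediction; prompt-designing strategies surveyed in the paper concern choices such as which demonstrations to include and how to order and format them.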