NICE: To Optimize In-Context Examples or Not?
CoRR (2024)
Abstract
Recent works have shown that large language models (LLMs) work remarkably
well on a wide range of tasks through in-context learning and optimization of
in-context examples (ICE). However, most of these studies assume either a fixed
instruction or no instruction in the prompt, leading to the apparent consensus
that optimizing in-context examples is critical for better performance. We
challenge this consensus for instruction-tuned LLMs by investigating whether
in-context examples need to be optimized when task-specific instructions are
provided, and find that there are tasks for which various ways of optimizing
in-context examples yield diminishing returns.
We introduce a task-specific metric called NICE that quantifies the
learnability of a task from a given instruction and provides a heuristic for
deciding whether to optimize instructions or ICE for a new task. On a wide
range of tasks and a systematically created instruction set with gradually
added details, we validate our hypothesis empirically by computing NICE with
query-dependent bins of examples, comparing different instructions with ICE
selection methods, and performing label-perturbation experiments. We conclude
that tasks can be divided into two broad classes based on the NICE metric,
where the returns on ICE optimization follow predictable trends when
instructions are provided in the prompt.
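
The abstract does not define how NICE is computed, so the following is only a
loose, hypothetical sketch, not the paper's definition: one way to probe a
task's invariability to the choice of in-context examples under a fixed
instruction is to measure the spread of performance across random ICE draws;
a high mean with low spread would suggest ICE optimization offers diminishing
returns. The names evaluate, instruction, example_pool, and queries below are
assumed placeholders, not interfaces from the paper.

    import random
    from statistics import mean, pstdev

    def invariability_score(evaluate, instruction, example_pool, queries,
                            k=4, n_trials=20, seed=0):
        """Estimate how sensitive task performance is to the choice of ICE.

        evaluate is an assumed black-box callable that returns accuracy for
        a prompt built from (instruction, examples, queries).
        """
        rng = random.Random(seed)
        scores = []
        for _ in range(n_trials):
            ice = rng.sample(example_pool, k)  # one random draw of k examples
            scores.append(evaluate(instruction, ice, queries))
        # Low spread across draws means performance is largely invariant to
        # the ICE choice, so optimizing ICE is unlikely to pay off for this
        # instruction; high spread suggests ICE selection still matters.
        return mean(scores), pstdev(scores)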