Identifying Semantic Induction Heads to Understand In-Context Learning
CoRR (2024)
Abstract
Although large language models (LLMs) have demonstrated remarkable
performance, the lack of transparency in their inference logic raises concerns
about their trustworthiness. To better understand LLMs, we conduct a detailed
analysis of the operations of attention heads, with the aim of explaining the
in-context learning of LLMs. Specifically, we investigate whether attention
heads encode two types of relationships between tokens in natural language:
syntactic dependencies parsed from sentences and relations within knowledge
graphs. We find that certain attention heads exhibit a pattern in which, when
attending to head tokens, they recall the corresponding tail tokens and
increase the output logits of those tail tokens. More crucially, the formation
of such semantic induction heads correlates closely with the emergence of the
in-context learning ability of language models. The study of semantic
induction heads advances our understanding of the intricate operations of
attention heads in transformers, and further provides new insights into the
in-context learning of LLMs.
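The behavior described in the abstract, a head that attends to the head token of a relation and boosts the output logit of the corresponding tail token, can be scored per head via direct logit attribution. Below is a minimal NumPy sketch of one plausible scoring, assuming access to a head's attention pattern and its (OV-circuit) contribution to the residual stream; the function name, tensor shapes, and variable names are illustrative assumptions, not the paper's published method.

```python
import numpy as np

def semantic_head_score(attn_pattern, head_out, W_U, head_pos, tail_id):
    """Score one attention head as a candidate semantic induction head.

    attn_pattern: (seq, seq) attention weights of this head (query x key).
    head_out:     (seq, d_model) this head's contribution to the residual stream.
    W_U:          (d_model, vocab) unembedding matrix.
    head_pos:     position of the relation's 'head' token in the context.
    tail_id:      vocabulary id of the expected 'tail' token.

    Returns the attention paid to the head token from the final position,
    and the direct logit boost this head gives the tail token there.
    """
    q = attn_pattern.shape[0] - 1                 # final (prediction) position
    attn_to_head = attn_pattern[q, head_pos]      # does it look at the head token?
    logit_boost = head_out[q] @ W_U[:, tail_id]   # direct logit attribution
    return attn_to_head, logit_boost

# Toy usage with random tensors standing in for real model internals.
rng = np.random.default_rng(0)
seq, d_model, vocab = 10, 64, 500
attn = rng.dirichlet(np.ones(seq), size=seq)      # rows sum to 1, like softmax
head_out = rng.normal(size=(seq, d_model))
W_U = rng.normal(size=(d_model, vocab))
a, dl = semantic_head_score(attn, head_out, W_U, head_pos=3, tail_id=42)
print(f"attention to head token: {a:.3f}, tail-token logit boost: {dl:+.3f}")
```

Under this reading, a head qualifies when both quantities are jointly high: it attends to the head token of the relation and its output directly raises the tail token's logit.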