Interpreting Context Look-ups in Transformers: Investigating Attention-MLP Interactions
CoRR (2024)
Abstract
In this paper, we investigate the interplay between attention heads and
specialized "next-token" neurons in the multilayer perceptron (MLP) layers
that predict specific tokens. By prompting an LLM such as GPT-4 to explain
these model internals, we can elucidate the attention mechanisms that activate
certain next-token neurons. Our analysis identifies attention heads that
recognize contexts relevant to predicting a particular token and that activate
the associated neuron through the residual connection. We focus specifically
on heads in earlier layers that consistently activate the same next-token
neuron across similar prompts. Exploring these differential activation
patterns reveals that heads specialized for distinct linguistic contexts are
tied to the generation of particular tokens. Overall, our method combines
neural explanations with probing of isolated components to illuminate how
attention enables context-dependent, specialized processing in LLMs.
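To make the head-to-neuron pathway described above concrete, the sketch below shows one way to estimate how much each earlier-layer attention head writes into the direction read by a chosen "next-token" MLP neuron, using GPT-2 small from Hugging Face. This is not the authors' released code: the layer and neuron indices (MLP_LAYER, NEURON_IDX) and the prompt are illustrative placeholders, and the LayerNorm in front of the MLP is ignored, so the scores are only an approximation of the direct, residual-stream effect of each head on the neuron's pre-activation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")
model.eval()

MLP_LAYER, NEURON_IDX = 10, 1234          # hypothetical next-token neuron (placeholders)
cfg = model.config
n_heads, d_model = cfg.n_head, cfg.n_embd
d_head = d_model // n_heads

# Residual-stream direction that the chosen MLP neuron reads from (its input weight column).
w_in = model.transformer.h[MLP_LAYER].mlp.c_fc.weight[:, NEURON_IDX].detach()  # [d_model]

prompt = "The Eiffel Tower is located in the city of"
ids = tok(prompt, return_tensors="pt").input_ids


def split_heads(t):
    # [1, seq, d_model] -> [1, n_heads, seq, d_head]
    return t.view(1, -1, n_heads, d_head).transpose(1, 2)


scores = {}  # (layer, head) -> projection of the head's write onto the neuron's direction
with torch.no_grad():
    out = model(ids, output_hidden_states=True)
    hidden = out.hidden_states            # residual stream entering each block

    for layer in range(MLP_LAYER):        # only heads in earlier layers
        block = model.transformer.h[layer]
        x = block.ln_1(hidden[layer])     # pre-attention LayerNorm, as in the block
        qkv = block.attn.c_attn(x)        # [1, seq, 3*d_model]
        q, k, v = (split_heads(t) for t in qkv.split(d_model, dim=-1))

        att = (q @ k.transpose(-1, -2)) / d_head ** 0.5
        seq = att.size(-1)
        causal = torch.tril(torch.ones(seq, seq, dtype=torch.bool))
        att = att.masked_fill(~causal, float("-inf")).softmax(dim=-1)
        head_out = att @ v                # [1, n_heads, seq, d_head]

        w_o = block.attn.c_proj.weight    # Conv1D weight: [d_model, d_model]
        for h in range(n_heads):
            # What head h writes into the residual stream at the final position.
            write = head_out[0, h, -1] @ w_o[h * d_head:(h + 1) * d_head]  # [d_model]
            # Approximate direct effect on the neuron's pre-activation
            # (the LayerNorm before the MLP is not applied here).
            scores[(layer, h)] = (write @ w_in).item()

top = sorted(scores.items(), key=lambda kv: -kv[1])[:5]
print("Heads most aligned with the neuron's input direction:", top)
```

Running this over a set of similar prompts and comparing which heads repeatedly dominate the ranking mirrors, in a simplified form, the paper's strategy of looking for earlier-layer heads that consistently activate the same next-token neuron.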