Locating and Editing Factual Associations in Mamba
arXiv (2024)
Abstract
We investigate the mechanisms of factual recall in the Mamba state space
model. Our work is inspired by previous findings in autoregressive transformer
language models suggesting that their knowledge recall is localized to
particular modules at specific token locations; we therefore ask whether
factual recall in Mamba can be similarly localized. To investigate this, we
conduct four lines of experiments on Mamba. First, we apply causal tracing
(interchange interventions) to localize the key components inside Mamba that
are responsible for recalling facts. These interventions reveal that specific
components within middle layers show strong causal effects at the last token
of the subject, while interventions on later layers have their most pronounced
effect at the last token of the prompt, matching previous findings on
autoregressive transformers. Second, we show that rank-one model editing methods can
successfully insert facts at specific locations, again resembling findings on
transformer models. Third, we examine the linearity of Mamba's representations
of factual relations. Finally, we adapt attention-knockout techniques to Mamba
to dissect information flow during factual recall. We compare Mamba directly to
a similar-sized transformer and conclude that despite significant differences
in architectural approach, when it comes to factual recall, the two
architectures share many similarities.
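
The interchange interventions described above follow the standard causal-tracing recipe: run the model on a corrupted prompt, then restore a hidden state saved from the clean run at one layer and token position, and measure how much probability of the correct answer is recovered. Below is a minimal PyTorch sketch of that patching mechanics. It uses a toy module stack in place of a real Mamba model; the model, token ids, and layer indices are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Toy stand-in for a stack of Mamba blocks; the architecture is
# hypothetical -- a real experiment would hook the published model.
class ToyModel(nn.Module):
    def __init__(self, d=16, n_layers=4, vocab=100):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.layers = nn.ModuleList([nn.Linear(d, d) for _ in range(n_layers)])
        self.head = nn.Linear(d, vocab)

    def forward(self, ids):
        h = self.embed(ids)
        for layer in self.layers:
            h = torch.tanh(layer(h))
        return self.head(h)

def patched_logits(model, corrupt_ids, clean_state, layer_idx, pos):
    """Run on the corrupted prompt, but splice in the clean hidden
    state at (layer_idx, pos) -- an interchange intervention."""
    def hook(module, inp, out):
        out = out.clone()
        out[:, pos] = clean_state        # restore the clean activation
        return out
    handle = model.layers[layer_idx].register_forward_hook(hook)
    try:
        logits = model(corrupt_ids)
    finally:
        handle.remove()
    return logits

model = ToyModel()
clean_ids = torch.tensor([[5, 7, 9]])    # e.g. "The Eiffel Tower is in"
corrupt_ids = torch.tensor([[5, 3, 9]])  # subject token corrupted
layer_idx, pos = 2, 1                    # middle layer, last subject token

# Clean run: cache the hidden state we will later restore.
saved = {}
def save_hook(module, inp, out):
    saved["h"] = out[:, pos].detach()
h = model.layers[layer_idx].register_forward_hook(save_hook)
model(clean_ids)
h.remove()

# Causal effect: probability recovered for the correct answer token.
answer = 42                              # hypothetical answer token id
p_corrupt = model(corrupt_ids).softmax(-1)[0, -1, answer]
p_patched = patched_logits(model, corrupt_ids, saved["h"],
                           layer_idx, pos).softmax(-1)[0, -1, answer]
print(f"recovery: {(p_patched - p_corrupt).item():+.4f}")
```

Sweeping (layer_idx, pos) over all layers and token positions yields the causal-effect maps the abstract summarizes: strong effects in middle layers at the last subject token, and in late layers at the last prompt token.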
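
The rank-one model editing mentioned above follows the ROME pattern: treat a linear projection W inside a block as a key-value store, and add a rank-one update so that a chosen key vector k (derived from the subject) maps to a chosen value vector v (encoding the new fact). A minimal NumPy sketch of the simplified update W + (v - Wk)k^T / (k^T k) follows; the full method additionally whitens k with a covariance estimate, which is omitted here, and the dimensions and vectors are illustrative.

```python
import numpy as np

def rank_one_edit(W, k, v):
    """Return W + (v - W k) k^T / (k^T k): the minimal rank-one
    update that makes the edited matrix map key k exactly to v."""
    residual = v - W @ k
    return W + np.outer(residual, k) / (k @ k)

rng = np.random.default_rng(0)
d_in, d_out = 8, 8
W = rng.normal(size=(d_out, d_in))
k = rng.normal(size=d_in)   # key: subject representation (illustrative)
v = rng.normal(size=d_out)  # value: representation of the new fact

W_edit = rank_one_edit(W, k, v)
print(np.allclose(W_edit @ k, v))   # True: k now maps exactly to v
other = rng.normal(size=d_in)
# Other keys are perturbed only in proportion to their overlap with k.
print(np.linalg.norm(W_edit @ other - W @ other))
```

The paper's finding is that such localized weight edits succeed at specific Mamba components, mirroring results on transformer MLP layers.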