Universal Neurons in GPT2 Language Models
CoRR (2024)
Abstract
A basic question within the emerging field of mechanistic interpretability is
the degree to which neural networks learn the same underlying mechanisms. In
other words, are neural mechanisms universal across different models? In this
work, we study the universality of individual neurons across GPT2 models
trained from different initial random seeds, motivated by the hypothesis that
universal neurons are likely to be interpretable. In particular, we compute
pairwise correlations of neuron activations over 100 million tokens for every
neuron pair across five different seeds and find that 1-5% of neurons are
universal, that is, pairs of neurons which consistently activate on the same
inputs. We then study these universal neurons in detail, finding that they
usually have clear interpretations, and we taxonomize them into a small number
of neuron families. We conclude by studying patterns in neuron weights to
establish several universal functional roles of neurons in simple circuits:
deactivating attention heads, changing the entropy of the next token
distribution, and predicting the next token to (not) be within a particular
set.
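The core measurement described above — correlating every neuron's activations in one model against every neuron's in another, over a shared token stream — can be sketched as follows. This is a minimal illustration with toy data and an assumed 0.9 correlation threshold, not the paper's actual pipeline; the function name and matrix shapes are hypothetical.

```python
import numpy as np

def max_cross_correlations(acts_a, acts_b):
    """For each neuron (column) of acts_a, return its maximum Pearson
    correlation with any neuron of acts_b, over the same token stream.
    Both inputs have shape (n_tokens, n_neurons)."""
    # Standardize each neuron's activations: zero mean, unit variance.
    za = (acts_a - acts_a.mean(0)) / acts_a.std(0)
    zb = (acts_b - acts_b.mean(0)) / acts_b.std(0)
    # Full pairwise correlation matrix, shape (neurons_a, neurons_b).
    corr = za.T @ zb / acts_a.shape[0]
    # Best match in model B for each neuron of model A.
    return corr.max(axis=1)

# Toy demo: two "models" whose neuron 0 fires on the same inputs.
rng = np.random.default_rng(0)
shared = rng.standard_normal((10_000, 1))
acts_a = np.hstack([shared, rng.standard_normal((10_000, 3))])
acts_b = np.hstack([shared + 0.01 * rng.standard_normal((10_000, 1)),
                    rng.standard_normal((10_000, 3))])

best = max_cross_correlations(acts_a, acts_b)
universal = best > 0.9  # illustrative threshold for calling a neuron "universal"
```

In this toy run, neuron 0 of model A finds a near-perfect partner in model B, while the independent random neurons correlate only weakly with anything; a universal neuron in the paper's sense is one whose best-match correlation stays high across all seed pairs.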