Linear Explanations for Individual Neurons
arXiv (2024)
Abstract
In recent years many methods have been developed to understand the internal
workings of neural networks, often by describing the function of individual
neurons in the model. However, these methods typically only focus on explaining
the very highest activations of a neuron. In this paper we show this is not
sufficient, and that the highest activation range is only responsible for a
very small percentage of the neuron's causal effect. Moreover, inputs
causing lower activations are often very different and cannot be reliably
predicted from high activations alone. We propose that neurons should
instead be understood as a linear combination of concepts, and develop an
efficient method for producing these linear explanations. In addition, we show
how to automatically evaluate description quality via simulation, i.e.,
by predicting neuron activations on unseen inputs in a vision setting.
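As a rough illustration of the idea, the sketch below models a neuron's activation as a weighted sum of concept presence scores, fits those weights on one split of inputs, and then evaluates the explanation by simulation: predicting activations on unseen inputs and correlating them with the true activations. All names, data, and the least-squares fit here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_concepts = 200, 3
# concept_scores[i, j]: how strongly concept j is present in input i
# (synthetic stand-in for real concept annotations)
concept_scores = rng.random((n_inputs, n_concepts))

# Assumed ground-truth neuron: responds linearly to the concepts, plus noise
true_weights = np.array([0.8, 0.3, -0.2])
activations = concept_scores @ true_weights + 0.05 * rng.standard_normal(n_inputs)

# Fit a linear explanation (concept weights) on a training split
train, test = slice(0, 150), slice(150, None)
w, *_ = np.linalg.lstsq(concept_scores[train], activations[train], rcond=None)

# Evaluate by simulation: predict activations on held-out inputs and
# measure how well they track the neuron's true activations
pred = concept_scores[test] @ w
corr = np.corrcoef(pred, activations[test])[0, 1]
print(f"simulation correlation: {corr:.2f}")
```

A high correlation on the held-out split indicates the linear combination of concepts is a faithful description of the neuron across its full activation range, not just its top activations.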