Scale Alone Does not Improve Mechanistic Interpretability in Vision Models
arXiv (2023)
Abstract
In light of the recent widespread adoption of AI systems, understanding the
internal information processing of neural networks has become increasingly
critical. Most recently, machine vision has seen remarkable progress by scaling
neural networks to unprecedented levels in dataset and model size. We here ask
whether this extraordinary increase in scale also positively impacts the field
of mechanistic interpretability. In other words, has our understanding of the
inner workings of scaled neural networks improved as well? We use a
psychophysical paradigm to quantify one form of mechanistic interpretability
for a diverse suite of nine models and find no scaling effect for
interpretability, neither for model nor dataset size. Specifically, none of
the investigated state-of-the-art models are easier to interpret than the
GoogLeNet model from almost a decade ago. Latest-generation vision models
appear even less interpretable than older architectures, hinting at a
regression rather than an improvement, with modern models sacrificing
interpretability for accuracy. These results highlight the need for models
explicitly designed to be mechanistically interpretable and the need for more
helpful interpretability methods to increase our understanding of networks at
an atomic level. We release a dataset containing more than 130,000 human
responses from our psychophysical evaluation of 767 units across nine models.
This dataset facilitates research on automated instead of human-based
interpretability evaluations, which can ultimately be leveraged to directly
optimize the mechanistic interpretability of models.
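To illustrate the kind of automated analysis the released dataset could support, here is a minimal sketch that aggregates trial-level human responses into a per-model interpretability score (the fraction of correct psychophysical responses). The file name `responses.csv` and the column names `model`, `unit`, and `correct` are hypothetical assumptions for illustration, not the paper's published schema.

```python
# Minimal sketch: per-model interpretability as mean human accuracy.
# Assumes a hypothetical trial-level CSV with columns:
#   model   - name of the vision model being evaluated
#   unit    - identifier of the evaluated unit (767 units total)
#   correct - 1 if the human response was correct, else 0
import csv
from collections import defaultdict

def interpretability_scores(path: str) -> dict[str, float]:
    """Return the mean fraction of correct responses per model."""
    correct = defaultdict(int)  # model -> number of correct responses
    total = defaultdict(int)    # model -> total number of responses
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            model = row["model"]
            total[model] += 1
            correct[model] += int(row["correct"])
    return {m: correct[m] / total[m] for m in total}

if __name__ == "__main__":
    scores = interpretability_scores("responses.csv")
    for model, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{model}: {score:.3f}")
```

Under the paper's finding, a table produced this way would show no systematic advantage for larger models or datasets over a baseline such as GoogLeNet.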