1. The Local Interaction Basis: Identifying Computationally-Relevant and Sparsely Interacting Features in Neural Networks
- Author
Bushnaq, Lucius, Heimersheim, Stefan, Goldowsky-Dill, Nicholas, Braun, Dan, Mendel, Jake, Hänni, Kaarel, Griffin, Avery, Stöhler, Jörn, Wache, Magdalena, and Hobbhahn, Marius
- Abstract
Mechanistic interpretability aims to understand the behavior of neural networks by reverse-engineering their internal computations. However, current methods struggle to find clear interpretations of neural network activations because a decomposition of activations into computational features is missing: individual neurons or model components do not cleanly correspond to distinct features or functions. We present a novel interpretability method that aims to overcome this limitation by transforming the activations of the network into a new basis, the Local Interaction Basis (LIB). LIB aims to identify computational features by removing irrelevant activations and interactions. Our method drops irrelevant activation directions and aligns the basis with the singular vectors of the Jacobian matrix between adjacent layers. It also scales features based on their importance for downstream computation, producing an interaction graph that shows all computationally-relevant features and interactions in a model. We evaluate the effectiveness of LIB on modular addition and CIFAR-10 models, finding that it identifies more computationally-relevant features that interact more sparsely, compared to principal component analysis. However, LIB does not yield substantial improvements in interpretability or interaction sparsity when applied to language models. We conclude that LIB is a promising theory-driven approach for analyzing neural networks, but in its current form is not applicable to large language models.
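The basis-alignment step described in the abstract can be illustrated with a minimal sketch, assuming a simple feed-forward setting where a callable maps one layer's activations to the next. This is not the authors' implementation; the helper name `local_interaction_basis`, the batch-averaged Jacobian, and the singular-value scaling are assumptions made for illustration.

```python
import torch

def local_interaction_basis(layer_fn, acts, tol=1e-6):
    """Illustrative sketch: rotate layer-l activations into the singular-vector
    basis of the Jacobian between adjacent layers and drop near-null directions.

    layer_fn: callable mapping one sample's activations at layer l to layer l+1
    acts:     (batch, d_in) tensor of layer-l activations
    """
    # Per-sample Jacobians of the next layer w.r.t. these activations, averaged over the batch
    jac = torch.stack(
        [torch.autograd.functional.jacobian(layer_fn, a) for a in acts]
    ).mean(dim=0)                                   # shape (d_out, d_in)

    # Right singular vectors give candidate feature directions in layer l
    U, S, Vh = torch.linalg.svd(jac, full_matrices=False)

    keep = S > tol                                  # drop computationally irrelevant directions
    # Rotate into the retained directions and scale each by its singular value,
    # a rough proxy for importance to downstream computation
    return acts @ Vh[keep].T * S[keep]
```

Applied at each layer, a transformation of this kind yields bases in which interactions between adjacent layers' features can be read off, as in the interaction graph the abstract describes.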
- Published
- 2024