
Transformers Use Causal World Models in Maze-Solving Tasks

Authors:
Spies, Alex F.
Edwards, William
Ivanitskiy, Michael I.
Skapars, Adrians
Räuker, Tilman
Inoue, Katsumi
Russo, Alessandra
Shanahan, Murray
Publication Year:
2024

Abstract

Recent studies in interpretability have explored the inner workings of transformer models trained on tasks across various domains, often discovering that these networks naturally develop surprisingly structured representations. When such representations comprehensively reflect the task domain's structure, they are commonly referred to as "World Models" (WMs). In this work, we discover such WMs in transformers trained on maze tasks. In particular, by employing Sparse Autoencoders (SAEs) and analysing attention patterns, we examine the construction of WMs and demonstrate consistency between the circuit analysis and the SAE feature-based analysis. We intervene upon the isolated features to confirm their causal role and, in doing so, find asymmetries between certain types of interventions. Surprisingly, we find that models are able to reason with respect to a greater number of active features than they see during training, even if attempting to specify these in the input token sequence would lead the model to fail. Furthermore, we observe that varying positional encodings can alter how WMs are encoded in a model's residual stream. By analysing the causal role of these WMs in a toy domain, we hope to make progress toward an understanding of emergent structure in the representations acquired by Transformers, leading to the development of more interpretable and controllable AI systems.

Comment: Main paper: 9 pages, 9 figures. Supplementary material: 10 pages, 17 additional figures. Code and data will be available upon publication. Corresponding author: A. F. Spies (afspies@imperial.ac.uk)
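
To illustrate the kind of analysis the abstract describes, the sketch below shows a minimal sparse autoencoder over residual-stream activations together with a feature-level causal intervention (clamping one SAE feature and decoding back into the residual stream). It is an assumption-laden illustration, not the authors' implementation: all dimensions, names, and the toy activations are made up for the example.

# Minimal sketch (PyTorch): SAE feature extraction and a feature-clamping
# intervention on residual-stream activations. Sizes and data are illustrative.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x: torch.Tensor):
        # ReLU encoding yields non-negative, typically sparse feature activations.
        f = torch.relu(self.encoder(x))
        x_hat = self.decoder(f)
        return x_hat, f

def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparsity.
    return ((x - x_hat) ** 2).mean() + l1_coeff * f.abs().mean()

def intervene(sae: SparseAutoencoder, x: torch.Tensor, feature_idx: int, value: float):
    # Causal intervention: clamp one SAE feature to a chosen value and decode
    # back into the residual stream, leaving all other features unchanged.
    _, f = sae(x)
    f = f.clone()
    f[..., feature_idx] = value
    return sae.decoder(f)

if __name__ == "__main__":
    d_model, d_features = 128, 512              # illustrative sizes
    sae = SparseAutoencoder(d_model, d_features)
    acts = torch.randn(32, d_model)             # stand-in for residual-stream activations
    x_hat, f = sae(acts)
    print("loss:", sae_loss(acts, x_hat, f).item())
    patched = intervene(sae, acts, feature_idx=7, value=5.0)  # boost one feature
    print("patched shape:", tuple(patched.shape))

In practice the clamped reconstruction would be patched back into the model's forward pass at the relevant layer, and the effect on the model's maze predictions would indicate whether the feature plays a causal role.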

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2412.11867
Document Type:
Working Paper