
Weight-based Analysis of Detokenization in Language Models: Understanding the First Stage of Inference Without Inference

Authors:
Kamoda, Go
Heinzerling, Benjamin
Inaba, Tatsuro
Kudo, Keito
Sakaguchi, Keisuke
Inui, Kentaro
Publication Year:
2025

Abstract

According to the stages-of-inference hypothesis, early layers of language models map their subword-tokenized input, which does not necessarily correspond to a linguistically meaningful segmentation, to more meaningful representations that form the model's "inner vocabulary". Prior analysis of this detokenization stage has predominantly relied on probing and interventions such as path patching, which involve selecting particular inputs, choosing a subset of components that will be patched, and then observing changes in model behavior. Here, we show that several important aspects of the detokenization stage can be understood purely by analyzing model weights, without performing any model inference steps. Specifically, we introduce an analytical decomposition of first-layer attention in GPT-2. Our decomposition yields interpretable terms that quantify the relative contributions of position-related, token-related, and mixed effects. By focusing on terms in this decomposition, we discover weight-based explanations of attention bias toward close tokens and attention for detokenization.

Comment: 22 pages, 14 figures, to appear in NAACL Findings 2025
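To illustrate the kind of decomposition the abstract describes, here is a minimal sketch, assuming the input to GPT-2's first-layer attention is the sum of a token embedding e_{t_i} and a positional embedding p_i, with query and key projections W_Q, W_K and key dimension d_k (layer normalization and biases omitted); this notation is introduced here for illustration and is not necessarily the authors' exact formulation. The pre-softmax attention score then splits into four weight-only terms:

\[
s_{ij}
= \frac{(e_{t_i} + p_i)^\top W_Q^\top W_K (e_{t_j} + p_j)}{\sqrt{d_k}}
= \frac{1}{\sqrt{d_k}} \Big(
\underbrace{e_{t_i}^\top W_Q^\top W_K e_{t_j}}_{\text{token--token}}
+ \underbrace{e_{t_i}^\top W_Q^\top W_K p_j}_{\text{token--position}}
+ \underbrace{p_i^\top W_Q^\top W_K e_{t_j}}_{\text{position--token}}
+ \underbrace{p_i^\top W_Q^\top W_K p_j}_{\text{position--position}}
\Big)
\]

Under these assumptions, the position-position term depends only on the positional embedding matrix and the attention weights, so a bias toward nearby tokens can in principle be read off from the weights alone, and the token-token term likewise involves only the embedding and attention matrices, which is the sense in which the detokenization stage can be studied without running any inference.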

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2501.15754
Document Type:
Working Paper