
Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking

Authors:
Prakash, Nikhil
Shaham, Tamar Rott
Haklay, Tal
Belinkov, Yonatan
Bau, David
Publication Year:
2024

Abstract

Fine-tuning on generalized tasks such as instruction following, code generation, and mathematics has been shown to enhance language models' performance on a range of tasks. Nevertheless, explanations of how such fine-tuning influences the internal computations of these models remain elusive. We study how fine-tuning affects the internal mechanisms implemented in language models. As a case study, we explore the property of entity tracking, a crucial facet of language comprehension, where models fine-tuned on mathematics show substantial performance gains. We identify the mechanism that enables entity tracking and show that (i) the same circuit primarily implements entity tracking in both the original model and its fine-tuned versions. In fact, the original model's entity tracking circuit, when applied to the fine-tuned models, performs better than the full original model. (ii) The circuits of all the models implement roughly the same functionality: entity tracking is performed by tracking the position of the correct entity in both the original model and its fine-tuned versions. (iii) The performance boost in the fine-tuned models is primarily attributed to their improved ability to handle the augmented positional information. To uncover these findings, we employ path patching; DCM, which automatically detects model components responsible for specific semantics; and CMAP, a new approach for patching activations across models to reveal improved mechanisms. Our findings suggest that fine-tuning enhances, rather than fundamentally alters, the mechanistic operation of the model.

Comments: ICLR 2024. 26 pages, 13 figures. Code and data at https://finetuning.baulab.info/
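
The abstract names three interpretability techniques, of which CMAP (patching activations across models) is introduced by this paper. The sketch below illustrates the core idea of cross-model activation patching only in broad strokes, using PyTorch forward hooks on a toy model; it is a minimal, hypothetical illustration, not the authors' code, and all names in it (ToyModel, capture_activation, run_with_patched_activation) are assumptions made for the example.

```python
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    """Stand-in for one transformer block; in practice these would be
    matched layers of a base model and its fine-tuned counterpart."""
    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(d, d)

    def forward(self, x):
        return self.proj(x)

class ToyModel(nn.Module):
    def __init__(self, d=16, n_layers=4):
        super().__init__()
        self.blocks = nn.ModuleList(ToyBlock(d) for _ in range(n_layers))
        self.head = nn.Linear(d, 10)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.head(x)

def capture_activation(model, layer_idx, x):
    """Run `model` on `x` and return the output of block `layer_idx`."""
    cache = {}
    def hook(module, inputs, output):
        cache["act"] = output.detach()
    handle = model.blocks[layer_idx].register_forward_hook(hook)
    with torch.no_grad():
        model(x)
    handle.remove()
    return cache["act"]

def run_with_patched_activation(model, layer_idx, patched_act, x):
    """Run `model` on `x`, replacing block `layer_idx`'s output with
    `patched_act` (an activation captured from a different model)."""
    def hook(module, inputs, output):
        return patched_act  # returning a value from a forward hook overrides the output
    handle = model.blocks[layer_idx].register_forward_hook(hook)
    with torch.no_grad():
        out = model(x)
    handle.remove()
    return out

torch.manual_seed(0)
base_model, tuned_model = ToyModel(), ToyModel()  # proxies for base vs. fine-tuned
x = torch.randn(1, 16)

# Cross-model patching: splice the fine-tuned model's activation at one layer
# into the base model's forward pass at the same layer.
layer = 2
tuned_act = capture_activation(tuned_model, layer, x)
patched_logits = run_with_patched_activation(base_model, layer, tuned_act, x)
print(patched_logits.shape)  # torch.Size([1, 10])
```

Under this reading of the abstract, comparing the patched base model's task performance against the unpatched base model indicates how much of the fine-tuned model's improvement is carried by that layer's activations; consult the paper and the linked code for the actual procedure.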

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2402.14811
Document Type:
Working Paper