
Position Paper: Toward New Frameworks for Studying Model Representations

Authors: Golechha, Satvik; Dao, James
Publication Year: 2024

Abstract

Mechanistic interpretability (MI) aims to understand AI models by reverse-engineering the exact algorithms that neural networks learn. Most work in MI to date has studied behaviors and capabilities that are trivial and token-aligned. However, most capabilities are not that trivial, which motivates studying the hidden representations inside these networks as the unit of analysis. We conduct a literature review, formalize representations for features and behaviors, highlight their importance and evaluation, and perform some basic exploration of the mechanistic interpretability of representations. With discussion and exploratory results, we justify our position that studying representations is an important and under-studied area, that currently established methods in MI are not sufficient for understanding representations, and we push for the research community to work toward new frameworks for studying representations.

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2402.03855
Document Type: Working Paper