
Unveiling Language Competence Neurons: A Psycholinguistic Approach to Model Interpretability

Authors :
Duan, Xufeng
Zhou, Xinyu
Xiao, Bei
Cai, Zhenguang G.
Publication Year :
2024

Abstract

As large language models (LLMs) advance in their linguistic capacity, understanding how they capture aspects of language competence remains a significant challenge. This study therefore employs psycholinguistic paradigms, which are well-suited for probing deeper cognitive aspects of language processing, to explore neuron-level representations in language models across three tasks: sound-shape association, sound-gender association, and implicit causality. Our findings indicate that while GPT-2-XL struggles with the sound-shape task, it demonstrates human-like abilities in both sound-gender association and implicit causality. Targeted neuron ablation and activation manipulation reveal a crucial relationship: when GPT-2-XL displays a linguistic ability, specific neurons correspond to that competence; conversely, the absence of such an ability indicates a lack of specialized neurons. This study is the first to utilize psycholinguistic experiments to investigate deep language competence at the neuron level, providing a new level of granularity in model interpretability and insights into the internal mechanisms driving language ability in transformer-based LLMs.
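
For readers who want a concrete sense of the neuron-ablation procedure mentioned in the abstract, below is a minimal sketch using GPT-2-XL from the Hugging Face transformers library and a PyTorch forward hook. The layer index, neuron indices, and probe prompt are illustrative assumptions only, not the authors' actual selections or code.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load GPT-2-XL, the model studied in the paper.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
model.eval()

# Hypothetical targets: a few MLP neurons in one transformer block.
LAYER_IDX = 20              # assumed layer index, for illustration only
NEURON_IDXS = [311, 1874]   # assumed neuron indices, for illustration only

def ablate_neurons(module, inputs, output):
    # Zero out the activations of the chosen neurons in the MLP hidden state.
    output[..., NEURON_IDXS] = 0.0
    return output

# Register the hook on the MLP up-projection (c_fc) of the chosen block.
hook = model.transformer.h[LAYER_IDX].mlp.c_fc.register_forward_hook(ablate_neurons)

# Probe next-token predictions with the ablation active; an implicit-causality
# style prompt is used here purely as an example.
prompt = "Mary praised John because"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
print(tokenizer.decode(torch.argmax(logits)))

hook.remove()  # restore the original model behaviour
```

In practice one would compare the model's continuation preferences with and without the hook attached (and, for activation manipulation, scale rather than zero the selected activations) to assess whether the targeted neurons carry the competence in question.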

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2409.15827
Document Type :
Working Paper