Beyond Single Concept Vector: Modeling Concept Subspace in LLMs with Gaussian Distribution
- Publication Year :
- 2024
Abstract
- Probing learned concepts in large language models (LLMs) is crucial for understanding how semantic knowledge is encoded internally. Training linear classifiers on probing tasks is a principled approach for identifying the vector that represents a certain concept in the representation space. However, the single vector identified for a concept varies with both the data and the training process, making it less robust and weakening its effectiveness in real-world applications. To address this challenge, we propose an approach to approximate the subspace representing a specific concept. Building on linear probing classifiers, we extend individual concept vectors into a Gaussian Concept Subspace (GCS). We demonstrate the effectiveness of GCS by measuring its faithfulness and plausibility across multiple LLMs of different sizes and architectures. Additionally, we use representation intervention tasks to showcase its efficacy in real-world applications such as emotion steering. Experimental results indicate that GCS concept vectors have the potential to balance steering performance with maintaining fluency in natural language generation tasks.
- Comment: 28 pages, 9 figures
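
To make the core idea concrete, here is a minimal sketch of estimating a Gaussian subspace over linear-probe concept vectors. It assumes `hidden_states` (an n x d array of LLM activations) and binary `labels` marking whether each sample expresses the concept; the resampling scheme and the diagonal-covariance simplification are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: fit many linear probes on resampled data, then model the
# resulting concept vectors with a Gaussian (mean + diagonal covariance).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_gcs(hidden_states, labels, n_probes=100, subsample=0.8, seed=0):
    """Estimate a Gaussian over unit-normalized probe weight vectors."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    directions = []
    for _ in range(n_probes):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        probe = LogisticRegression(max_iter=1000).fit(hidden_states[idx], labels[idx])
        w = probe.coef_.ravel()
        directions.append(w / np.linalg.norm(w))  # unit-normalize each direction
    directions = np.stack(directions)
    mu = directions.mean(axis=0)         # mean concept vector
    var = directions.var(axis=0) + 1e-6  # diagonal covariance (regularized)
    return mu, var

def sample_concept_vectors(mu, var, k=10, seed=0):
    """Draw k concept vectors from the fitted Gaussian subspace."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(mu, np.sqrt(var), size=(k, mu.shape[0]))
    return samples / np.linalg.norm(samples, axis=1, keepdims=True)
```

Sampling from the Gaussian rather than relying on one trained probe is what gives the approach its robustness: any single probe's direction reflects its particular data split, while the distribution captures the spread of plausible concept vectors.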
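The abstract's representation intervention (e.g., emotion steering) can likewise be sketched as adding a scaled concept vector to hidden states at one layer during generation, a common steering recipe. The sketch below assumes a Hugging Face-style decoder; the layer layout (`model.model.layers`), `layer_idx`, and `alpha` are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch: shift one layer's hidden states along a sampled GCS
# concept direction via a forward hook.
import torch

def add_steering_hook(model, concept_vector, layer_idx=12, alpha=4.0):
    """Register a forward hook that adds alpha * direction to the chosen
    layer's hidden states. Returns the handle so the intervention can be
    removed with handle.remove() after generation."""
    direction = torch.as_tensor(concept_vector)

    def hook(module, inputs, output):
        # Decoder layers typically return a tuple whose first element is
        # the hidden-state tensor of shape (batch, seq_len, d).
        hidden = output[0] if isinstance(output, tuple) else output
        shift = alpha * direction.to(device=hidden.device, dtype=hidden.dtype)
        steered = hidden + shift
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered

    layer = model.model.layers[layer_idx]  # layout assumed; varies by architecture
    return layer.register_forward_hook(hook)
```

The trade-off the abstract highlights, steering strength versus fluency, would surface here through `alpha`: larger shifts steer more strongly but risk degrading the naturalness of generated text.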
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2410.00153
- Document Type :
- Working Paper