
Exploring Information Processing in Large Language Models: Insights from Information Bottleneck Theory

Authors:
Yang, Zhou
Qi, Zhengyu
Ren, Zhaochun
Jia, Zhikai
Sun, Haizhou
Zhu, Xiaofei
Liao, Xiangwen
Publication Year:
2025

Abstract

Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of tasks by understanding input information and predicting corresponding outputs. However, the internal mechanisms by which LLMs comprehend input and make effective predictions remain poorly understood. In this paper, we explore the working mechanism of LLMs in information processing from the perspective of Information Bottleneck Theory. We propose a non-training construction strategy to define a task space and identify the following key findings: (1) LLMs compress input information into specific task spaces (e.g., sentiment space, topic space) to facilitate task understanding; (2) they then extract and utilize relevant information from the task space at critical moments to generate accurate predictions. Based on these insights, we introduce two novel approaches: Information Compression-based Context Learning (IC-ICL) and Task-Space-guided Fine-Tuning (TS-FT). IC-ICL enhances reasoning performance and inference efficiency by compressing retrieved example information into the task space. TS-FT employs a space-guided loss to fine-tune LLMs, encouraging the learning of more effective compression and selection mechanisms. Experiments across multiple datasets validate the effectiveness of task space construction. Additionally, IC-ICL not only improves performance but also accelerates inference by over 40%, while TS-FT achieves superior results with a minimal strategy adjustment.

Comment: 9 pages, 9 figures, 3 tables
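For context, the Information Bottleneck framework the abstract invokes seeks a compressed representation T of the input X that discards everything not predictive of the target Y; the paper's "task space" plays the role of T. The classical objective reads:

```latex
% Classical Information Bottleneck objective: learn a stochastic encoder p(t|x)
% that compresses X (small I(X;T)) while keeping T informative about Y (large I(T;Y)).
\min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) \;-\; \beta\, I(T;Y)
```

Here beta trades off compression against predictive sufficiency; findings (1) and (2) in the abstract correspond to the compression and prediction sides of this trade-off.

The abstract does not spell out the space-guided loss used by TS-FT, so the following is only a minimal sketch under assumptions: it supposes the task space is summarized by one centroid per region and adds a penalty pulling each example's pooled hidden state toward its assigned centroid. The function name, pooling choice, centroid representation, and weight `lam` are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of a "space-guided" auxiliary loss, NOT the paper's
# implementation: the centroid representation, mean pooling, and the
# weighting coefficient `lam` are assumptions made for illustration.
import torch
import torch.nn.functional as F

def space_guided_loss(hidden_states: torch.Tensor,
                      labels: torch.Tensor,
                      task_centroids: torch.Tensor,
                      lm_loss: torch.Tensor,
                      lam: float = 0.1) -> torch.Tensor:
    """Combine the usual LM loss with a term pulling each example's pooled
    representation toward the centroid of its task-space region.

    hidden_states:  (batch, seq_len, d) final-layer activations
    labels:         (batch,) index of each example's task-space region
    task_centroids: (num_regions, d) one centroid per region
    lm_loss:        scalar language-modeling loss computed upstream
    """
    pooled = hidden_states.mean(dim=1)        # (batch, d) mean-pool over tokens
    target = task_centroids[labels]           # (batch, d) centroid per example
    space_term = F.mse_loss(pooled, target)   # distance to assigned centroid
    return lm_loss + lam * space_term

# Toy usage with random tensors, just to show the shapes involved.
h = torch.randn(4, 16, 32)          # batch=4, seq=16, hidden=32
y = torch.tensor([0, 1, 1, 2])      # task-space region assignments
c = torch.randn(3, 32)              # 3 task-space centroids
base = torch.tensor(2.3)            # stand-in for the LM loss
print(space_guided_loss(h, y, c, base).item())
```

In this sketch the auxiliary term only nudges representations toward the task space while the language-modeling loss still drives prediction, which is consistent with the abstract's description of TS-FT as a minimal strategy adjustment.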

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2501.00999
Document Type:
Working Paper