
Attention Score is not All You Need for Token Importance Indicator in KV Cache Reduction: Value Also Matters

Authors:
Guo, Zhiyu
Kamigaito, Hidetaka
Watanabe, Taro
Publication Year:
2024

Abstract

Scaling the context size of large language models (LLMs) enables them to perform various new tasks, e.g., book summarization. However, the memory cost of the Key and Value (KV) cache in attention significantly limits the practical applications of LLMs. Recent works have explored token pruning for KV cache reduction in LLMs, relying solely on attention scores as a token importance indicator. However, our investigation into value vector norms revealed a notably non-uniform pattern, calling into question the reliance on attention scores alone. Inspired by this, we propose a new method, Value-Aware Token Pruning (VATP), which uses both attention scores and the $\ell_1$ norm of value vectors to evaluate token importance. Extensive experiments on LLaMA2-7B-chat and Vicuna-v1.5-7B across 16 LongBench tasks demonstrate that VATP outperforms attention-score-only baselines in over 12 tasks, confirming the effectiveness of incorporating value vector norms into token importance evaluation of LLMs.

Comment: Accepted at EMNLP 2024 (Main)
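The abstract describes scoring cached tokens by combining attention scores with the $\ell_1$ norm of their value vectors. Below is a minimal, hypothetical sketch of such a value-aware importance score and a top-k pruning step; the tensor shapes, function names, and the multiplicative combination are illustrative assumptions based only on the abstract, not the paper's exact implementation.

```python
import torch


def value_aware_importance(attn_weights: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
    """Illustrative value-aware token-importance score.

    attn_weights: (num_heads, q_len, kv_len) softmax attention weights
    values:       (num_heads, kv_len, head_dim) cached value vectors
    Returns:      (num_heads, kv_len) importance score per cached token
    """
    # Accumulate the attention each cached token receives over query positions
    # (the "attention score" indicator used by prior pruning work).
    attn_score = attn_weights.sum(dim=1)        # (num_heads, kv_len)
    # L1 norm of each token's value vector (the "value also matters" signal).
    value_norm = values.abs().sum(dim=-1)       # (num_heads, kv_len)
    # Assumed combination: weight attention scores by the value-vector norm.
    return attn_score * value_norm


def prune_kv_cache(keys: torch.Tensor, values: torch.Tensor,
                   attn_weights: torch.Tensor, keep: int):
    """Keep the top-`keep` cached tokens per head by the score above."""
    scores = value_aware_importance(attn_weights, values)   # (num_heads, kv_len)
    topk = scores.topk(keep, dim=-1).indices                 # (num_heads, keep)
    idx = topk.unsqueeze(-1).expand(-1, -1, keys.size(-1))   # gather index
    return keys.gather(1, idx), values.gather(1, idx)
```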

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.12335
Document Type:
Working Paper