
Lossless KV Cache Compression to 2%

Authors:
Yang, Zhen
Han, J. N.
Wu, Kan
Xie, Ruobing
Wang, An
Sun, Xingwu
Kang, Zhanhui
Publication Year:
2024

Abstract

Large language models have revolutionized data processing in numerous domains, and their ability to handle extended-context reasoning has received notable recognition. To speed up inference, maintaining a key-value (KV) cache is essential. However, the growing memory demands of the KV cache create significant hurdles for efficient deployment. This work introduces a novel architecture, Cross-Layer Latent Attention (CLLA), aimed at compressing the KV cache to less than 2% of its original size while maintaining comparable performance. CLLA integrates multiple aspects of KV cache compression, including attention head/dimension reduction, layer sharing, and quantization techniques, into a cohesive framework. Our extensive experiments demonstrate that CLLA achieves lossless performance on most tasks while using a minimal KV cache, marking a significant advancement toward practical KV cache compression.
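
The abstract names three compression levers (dimension reduction, cross-layer sharing, and quantization) without giving implementation details. The PyTorch sketch below is a hypothetical illustration of how such ideas could be combined in a single cache structure, not the authors' CLLA design; the class name, layer-grouping scheme, latent dimension, and int8 format are all assumptions.

# Illustrative sketch only: the layer grouping, latent dimension, and int8 scheme
# below are assumptions, not details taken from the paper.
import torch

class SharedLatentKVCache:
    """Toy KV cache combining the three ideas named in the abstract:
    - dimension reduction: entries are stored in a small latent dimension,
    - layer sharing: groups of layers read from one shared cache slot,
    - quantization: latent entries are kept as int8 with a per-token scale."""

    def __init__(self, num_layers, layers_per_group=4, latent_dim=64):
        self.group_of = [l // layers_per_group for l in range(num_layers)]
        num_groups = self.group_of[-1] + 1
        self.latent_dim = latent_dim
        # one growing list of (int8 latent, fp scale) per shared layer group
        self.store = [[] for _ in range(num_groups)]

    def append(self, layer_idx, latent_kv):
        # latent_kv: [batch, latent_dim], already projected down by the model
        scale = latent_kv.abs().amax(dim=-1, keepdim=True).clamp(min=1e-6) / 127.0
        q = torch.round(latent_kv / scale).to(torch.int8)
        self.store[self.group_of[layer_idx]].append((q, scale))

    def read(self, layer_idx):
        group = self.store[self.group_of[layer_idx]]
        if not group:
            return None
        # dequantize and stack along the sequence axis
        return torch.stack([q.float() * s for q, s in group], dim=1)

cache = SharedLatentKVCache(num_layers=32)
cache.append(layer_idx=0, latent_kv=torch.randn(1, 64))
print(cache.read(layer_idx=3).shape)  # layers 0-3 share one slot -> torch.Size([1, 1, 64])

In this toy setup the memory saving comes from storing one low-dimensional int8 latent per layer group instead of full-precision keys and values per head per layer; how CLLA actually allocates heads, dimensions, and shared layers is described in the paper itself.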

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.15252
Document Type:
Working Paper