
CSR: Achieving 1 Bit Key-Value Cache via Sparse Representation

Authors:
Zhang, Hongxuan
Zhao, Yao
Zheng, Jiaqi
Zhuang, Chenyi
Gu, Jinjie
Chen, Guihai
Publication Year:
2024

Abstract

The emergence of long-context text applications built on large language models (LLMs) has presented significant scalability challenges, particularly in memory footprint. The Key-Value (KV) cache, which stores attention keys and values to avoid redundant computation, grows linearly with context length and can consume so much memory that models fail to serve under limited memory resources. To address this issue, we propose Cache Sparse Representation (CSR), a novel approach that transforms the dense KV cache tensor into sparse indexes and weights, offering a more memory-efficient representation during LLM inference. Furthermore, we introduce NeuralDict, a neural-network-based method for automatically generating the dictionary used in our sparse representation. Our extensive experiments demonstrate that CSR achieves performance comparable to state-of-the-art KV cache quantization algorithms while remaining robust in memory-constrained environments.
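To make the abstract's idea concrete, the following is a minimal Python sketch of representing a dense KV-cache vector as a few (index, weight) pairs over a fixed dictionary of atoms. The random dictionary, the greedy matching-pursuit encoder, and all function names here are illustrative assumptions, not the paper's actual algorithm; in particular, the paper learns its dictionary with a neural method (NeuralDict) rather than sampling it randomly.

import numpy as np

def sparse_encode(x, dictionary, num_terms=4):
    # Greedily approximate vector x with `num_terms` atoms from `dictionary`
    # (shape: num_atoms x dim, rows assumed unit-norm). Illustrative only.
    residual = x.astype(np.float32).copy()
    idxs, weights = [], []
    for _ in range(num_terms):
        scores = dictionary @ residual          # correlation with each atom
        i = int(np.argmax(np.abs(scores)))      # best-matching atom index
        w = float(scores[i])                    # its coefficient
        residual -= w * dictionary[i]
        idxs.append(i)
        weights.append(w)
    return np.array(idxs, dtype=np.int32), np.array(weights, dtype=np.float16)

def sparse_decode(idxs, weights, dictionary):
    # Reconstruct a dense vector from its sparse (index, weight) representation.
    return (weights.astype(np.float32)[:, None] * dictionary[idxs]).sum(axis=0)

# Toy usage: one 128-dim key vector, a 1024-atom unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((1024, 128)).astype(np.float32)
D /= np.linalg.norm(D, axis=1, keepdims=True)
k = rng.standard_normal(128).astype(np.float32)
idxs, ws = sparse_encode(k, D)
k_hat = sparse_decode(idxs, ws, D)

Instead of caching 128 dense values per vector, only a handful of small integer indexes and low-precision weights are stored, which is the memory saving the sparse representation is aiming at.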

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2412.11741
Document Type:
Working Paper