1. EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models
- Authors
Junhao Hu, Wenrui Huang, Haoyi Wang, Weidong Wang, Tiancheng Hu, Qin Zhang, Hao Feng, Xusheng Chen, Yizhou Shan, and Tao Xie
- Subjects
Computer Science - Machine Learning; Computer Science - Computation and Language; Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Performance
- Abstract
Large Language Models (LLMs) are critical for a wide range of applications, but serving them efficiently becomes increasingly challenging as inputs grow more complex. Context caching improves serving performance by exploiting inter-request dependency and reusing the key-value (KV) cache across requests, thus improving time-to-first-token (TTFT). However, existing prefix-based context caching requires exact token prefix matches, limiting cache reuse in few-shot learning, multi-document QA, or retrieval-augmented generation, where prefixes may vary. In this paper, we present EPIC, an LLM serving system that introduces position-independent context caching (PIC), enabling modular KV cache reuse regardless of token chunk position (or prefix). EPIC features two key designs: AttnLink, which leverages static attention sparsity to minimize recomputation for accuracy recovery, and KVSplit, a customizable chunking method that preserves semantic coherence. Our experiments demonstrate that EPIC delivers up to an 8x improvement in TTFT and 7x higher throughput over existing systems, with negligible or no accuracy loss. By addressing the limitations of traditional caching approaches, EPIC enables more scalable and efficient LLM inference.
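To make the contrast between prefix-based and position-independent caching concrete, the sketch below illustrates the two lookup strategies in simplified form. It is not EPIC's implementation; the cache structures, the content-hash keying, and all function names are assumptions introduced purely for illustration.

```python
# Illustrative sketch: prefix-based vs. position-independent KV cache lookup.
# NOT EPIC's implementation; all names and data structures here are hypothetical.
from typing import Dict, List, Optional, Tuple

KVCache = object  # placeholder for a real tensor-backed KV cache entry


def prefix_lookup(cache: Dict[Tuple[int, ...], KVCache],
                  tokens: List[int]) -> Tuple[Optional[KVCache], int]:
    """Prefix caching: reuse is only possible on an exact token-prefix match."""
    best_len, best_kv = 0, None
    for prefix, kv in cache.items():
        n = len(prefix)
        if n > best_len and tuple(tokens[:n]) == prefix:
            best_len, best_kv = n, kv
    # Everything after the matched prefix must still be prefilled from scratch.
    return best_kv, best_len


def position_independent_lookup(cache: Dict[int, KVCache],
                                chunks: List[List[int]]) -> List[Optional[KVCache]]:
    """Position-independent caching (the PIC idea): each chunk's KV cache is keyed
    by its content, so it can be reused wherever the chunk appears in the prompt.
    A recovery step (e.g., EPIC's AttnLink recomputing a small number of boundary
    tokens) would then restore accuracy after the cached chunks are linked."""
    return [cache.get(hash(tuple(chunk))) for chunk in chunks]
```

Under this simplification, reordering two cached documents in a prompt defeats `prefix_lookup` entirely but still yields full hits in `position_independent_lookup`, which is the reuse gap the paper targets.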
- Published
2024