
Hyper-Compression: Model Compression via Hyperfunction

Authors:
Fan, Fenglei
Fan, Juntong
Wang, Dayang
Zhang, Jingbo
Dong, Zelin
Zhang, Shijun
Wang, Ge
Zeng, Tieyong
Publication Year:
2024

Abstract

The rapid growth of large models' size has far outpaced that of GPU memory. To bridge this gap, inspired by the succinct relationship between genotype and phenotype, we recast model compression as a problem of parameter representation and propose the so-called hyper-compression. Hyper-compression uses a hyperfunction to represent the parameters of the target network; notably, the hyperfunction is designed according to ergodic theory, which addresses the question of whether the trajectory of a low-dimensional dynamical system can eventually fill a high-dimensional space. Empirically, the proposed hyper-compression enjoys the following merits: 1) Preferable compression ratio; 2) No post-hoc retraining; 3) Affordable inference time; and 4) Short compression time. It compresses LLaMA2-7B in an hour and achieves close-to-int4-quantization performance without retraining, with a performance drop of less than 1%. Our work has the potential to invigorate the field of model compression, moving toward a harmony between the scaling law and the stagnation of hardware upgrades.
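The abstract's ergodic-theory premise can be illustrated concretely. The sketch below is a hypothetical toy, not the authors' actual hyperfunction: it uses a classical irrational winding on the torus, whose orbit is dense in the unit cube by Weyl equidistribution, so a group of d normalized weights can be approximated by a single integer time index. All names here (`hyperfunction`, `compress_group`, `decompress_group`, `ALPHAS`) and the search-based encoding are illustrative assumptions.

```python
import numpy as np

# Rationally independent irrationals define an ergodic winding on the 3-torus;
# the orbit {(n * ALPHAS) mod 1 : n = 1, 2, ...} is dense in [0, 1)^3.
ALPHAS = np.sqrt(np.array([2.0, 3.0, 5.0]))

def hyperfunction(n: int) -> np.ndarray:
    """Point reached by the torus winding after n steps (d = len(ALPHAS))."""
    return (n * ALPHAS) % 1.0

def compress_group(w: np.ndarray, max_steps: int = 200_000) -> int:
    """Find the time index whose trajectory point best matches the weight group.

    Brute-force search over the orbit; a real scheme would need a far
    cheaper encoder, but this shows why density of the orbit matters.
    """
    steps = np.arange(1, max_steps + 1)
    traj = (steps[:, None] * ALPHAS) % 1.0  # (max_steps, d) orbit points
    errs = np.abs(traj - w).max(axis=1)     # sup-norm error at each step
    return int(steps[errs.argmin()])

def decompress_group(n: int) -> np.ndarray:
    """Recover the approximate weight group from its stored index."""
    return hyperfunction(n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.uniform(0.0, 1.0, size=3)  # a weight group, normalized to [0, 1)
    n = compress_group(w)              # one integer replaces d floats
    print("original :", w)
    print("recovered:", decompress_group(n), "at index", n)
```

Storing one integer in place of d floats is where the compression ratio would come from in such a scheme; the paper's actual construction, error control, and per-layer treatment differ from this toy.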

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.00592
Document Type:
Working Paper