
QGen: On the Ability to Generalize in Quantization Aware Training

Authors:
AskariHemmat, MohammadHossein
Jeddi, Ahmadreza
Hemmat, Reyhane Askari
Lazarevich, Ivan
Hoffman, Alexander
Sah, Sudhakar
Saboori, Ehsan
Savaria, Yvon
David, Jean-Pierre
Publication Year: 2024

Abstract

Quantization lowers memory usage, computational requirements, and latency by using fewer bits to represent model weights and activations. In this work, we investigate the generalization properties of quantized neural networks, a characteristic that has received little attention despite its implications for model performance. First, we develop a theoretical model for quantization in neural networks and demonstrate how quantization functions as a form of regularization. Second, motivated by recent work connecting the sharpness of the loss landscape and generalization, we derive an approximate bound on the generalization of quantized models conditioned on the amount of quantization noise. We then validate our hypothesis by experimenting with over 2000 models trained on the CIFAR-10, CIFAR-100, and ImageNet datasets, covering both convolutional and transformer-based architectures.
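
The abstract's framing of quantization as injected noise acting like a regularizer can be made concrete with a generic quantization-aware training (QAT) step. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration assuming symmetric per-tensor fake quantization of weights with a straight-through estimator, so the forward pass sees quantized values while gradients flow to the full-precision weights.

    import torch
    import torch.nn as nn

    def fake_quantize(x: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
        # Symmetric per-tensor fake quantization: quantize to num_bits, then
        # dequantize. Rounding is non-differentiable, so a straight-through
        # estimator makes the backward pass behave like the identity.
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.detach().abs().max().clamp(min=1e-8) / qmax
        x_q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
        return x + (x_q - x).detach()

    class QuantLinear(nn.Linear):
        # Linear layer whose weights are fake-quantized on every forward pass;
        # the underlying full-precision weights are what the optimizer updates.
        def __init__(self, in_features: int, out_features: int, num_bits: int = 4):
            super().__init__(in_features, out_features)
            self.num_bits = num_bits

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            w_q = fake_quantize(self.weight, self.num_bits)
            return nn.functional.linear(x, w_q, self.bias)

    if __name__ == "__main__":
        torch.manual_seed(0)
        layer = QuantLinear(16, 8, num_bits=4)
        out = layer(torch.randn(4, 16))
        out.sum().backward()  # gradients reach the full-precision weights via the STE
        print(out.shape, layer.weight.grad.shape)

The gap between the quantized forward computation and the full-precision parameters is the "quantization noise" referred to in the abstract; lowering num_bits enlarges that gap and strengthens the regularization-like effect the paper analyzes.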

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2404.11769
Document Type: Working Paper