
Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)

Authors:
Choi, Jungwook
Chuang, Pierce I-Jen
Wang, Zhuo
Venkataramani, Swagath
Srinivasan, Vijayalakshmi
Gopalakrishnan, Kailash
Publication Year: 2018

Abstract

Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To reduce this cost, several quantization schemes have gained attention recently, some focusing on weight quantization and others on activation quantization. This paper proposes novel techniques that target weight and activation quantization separately, resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter $\alpha$ that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the weight distribution, without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full-precision networks) across a range of popular models and datasets.

Comment: arXiv admin note: substantial text overlap with arXiv:1805.06085
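The abstract describes both quantizers only at a high level. Below is a minimal NumPy sketch of how PACT-style activation clipping and SAWB-style weight binning could be realized at 2 bits; the function names and the coefficients c1 and c2 are illustrative assumptions (the paper fits such coefficients per bit width), and alpha is treated here as a fixed input, whereas in the paper it is learned during training via backpropagation.

    import numpy as np

    def pact_quantize_activations(x, alpha, num_bits=2):
        # PACT sketch: clip activations to [0, alpha], then quantize
        # uniformly. In the paper alpha is a learnable parameter trained
        # by backpropagation; here it is passed in as a fixed value.
        y = np.clip(x, 0.0, alpha)
        levels = 2 ** num_bits - 1
        return np.round(y * levels / alpha) * alpha / levels

    def sawb_scale(w, c1=3.2, c2=2.1):
        # SAWB sketch: estimate the clipping scale from the first and
        # second moments of the weight distribution, avoiding an
        # exhaustive search. c1 and c2 are illustrative coefficients,
        # not the paper's fitted values.
        return c1 * np.sqrt(np.mean(w ** 2)) - c2 * np.mean(np.abs(w))

    def sawb_quantize_weights(w, num_bits=2):
        # Symmetric uniform quantization onto 2**num_bits levels in
        # [-alpha, alpha]; at 2 bits these are {-a, -a/3, +a/3, +a}.
        alpha = sawb_scale(w)
        n = 2 ** num_bits - 1          # number of quantization steps
        w_c = np.clip(w, -alpha, alpha)
        q = np.round(w_c / alpha * n / 2 - 0.5) + 0.5
        return q * 2.0 * alpha / n

    # Usage: quantize random weights and activations to 2 bits.
    rng = np.random.default_rng(0)
    w_q = sawb_quantize_weights(rng.normal(0.0, 0.05, size=1000))
    a_q = pact_quantize_activations(rng.random(1000), alpha=1.0)

Note that the weight quantizer has no zero level at 2 bits: the four symmetric levels keep the quantizer unbiased around zero, which is one common design choice for very-low-precision weights.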

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.1807.06964
Document Type: Working Paper