
Iteratively Training Look-Up Tables for Network Quantization

Authors :
Cardinaux, Fabien
Uhlich, Stefan
Yoshiyama, Kazuki
García, Javier Alonso
Tiedemann, Stephen
Kemp, Thomas
Nakamura, Akira
Publication Year :
2018

Abstract

Operating deep neural networks on devices with limited resources requires the reduction of their memory footprints and computational requirements. In this paper we introduce a training method, called look-up table quantization (LUT-Q), which learns a dictionary and assigns each weight to one of the dictionary's values. We show that this method is very flexible and that many other techniques can be seen as special cases of LUT-Q. For example, we can constrain the dictionary trained with LUT-Q to generate networks with pruned weight matrices, or restrict the dictionary to powers-of-two to avoid the need for multiplications. In order to obtain fully multiplier-less networks, we also introduce a multiplier-less version of batch normalization. Extensive experiments on image recognition and object detection tasks show that LUT-Q consistently achieves better performance than other methods with the same quantization bitwidth.

NIPS 2018 workshop on Compact Deep Neural Networks with industrial applications
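As an illustration of the dictionary-plus-assignment idea described in the abstract, the following minimal Python/NumPy sketch alternates a nearest-value assignment with a dictionary update, in the spirit of a k-means step. It is an assumption for illustration only: the function name lutq_step, the initialization, and the update rule are hypothetical, and the paper's actual iterative training procedure (and its powers-of-two or pruning constraints) may differ in detail.

    import numpy as np

    def lutq_step(weights, dictionary):
        """One assignment/update step (illustrative k-means-style sketch, not the paper's exact algorithm).

        weights:    1-D array of full-precision weights of a layer
        dictionary: 1-D array of K shared values (the look-up table)
        Returns the quantized weights and the updated dictionary.
        """
        # Assignment: map every weight to the index of its nearest dictionary value.
        assignments = np.argmin(np.abs(weights[:, None] - dictionary[None, :]), axis=1)

        # Update: recompute each dictionary value as the mean of its assigned weights.
        new_dictionary = dictionary.copy()
        for k in range(dictionary.size):
            mask = assignments == k
            if mask.any():
                new_dictionary[k] = weights[mask].mean()

        quantized = new_dictionary[assignments]
        return quantized, new_dictionary

    # Hypothetical usage: quantize a layer's weights with a 2-bit (K = 4) look-up table.
    rng = np.random.default_rng(0)
    w = rng.normal(size=1000)
    lut = np.linspace(w.min(), w.max(), 4)   # initial dictionary
    for _ in range(10):
        w_q, lut = lutq_step(w, lut)

    # A powers-of-two constraint (mentioned in the abstract) could, for example, be
    # imposed by rounding the dictionary after each update:
    # lut = np.sign(lut) * 2.0 ** np.round(np.log2(np.abs(lut) + 1e-12))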

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....3e242ea768c14420dc91fd6d5b2905e7