Quantized convolutional neural networks through the lens of partial differential equations
- Author
Ben-Yair, Ido; Ben Shalom, Gil; Eliasof, Moshe; Treister, Eran
- Abstract
Quantization of convolutional neural networks (CNNs) is a common approach to ease the computational burden involved in the deployment of CNNs, especially on low-resource edge devices. However, fixed-point arithmetic is not natural to the type of computations involved in neural networks. In this work, we explore ways to improve quantized CNNs using a PDE-based perspective and analysis. First, we harness the total variation (TV) approach to apply edge-aware smoothing to the feature maps throughout the network. This aims to reduce outliers in the distribution of values and promote piecewise constant maps, which are more suitable for quantization. Second, we consider symmetric and stable variants of common CNNs for image classification and graph convolutional networks for graph node classification. We demonstrate through several experiments that the property of forward stability preserves the action of a network under different quantization rates. As a result, stable quantized networks behave similarly to their non-quantized counterparts even though they rely on fewer parameters. We also find that at times, stability even aids in improving accuracy. These properties are of particular interest for sensitive, resource-constrained, low-power or real-time applications like autonomous driving. (A minimal sketch of the TV-smoothing-before-quantization idea follows this record.)
- Published
2022
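
The abstract's central technical claim, that TV smoothing pushes feature maps toward piecewise constant form, which a uniform quantizer can then represent with less error, can be illustrated with a short sketch. The code below is not the paper's implementation: the gradient-descent TV smoother, the uniform quantizer, and the toy feature map are all illustrative assumptions.

```python
import numpy as np

def tv_smooth(x, lam=0.1, step=0.1, iters=50):
    """Edge-aware smoothing: gradient descent on an anisotropic
    total-variation penalty plus a data-fidelity term
    (illustrative; not the paper's exact scheme)."""
    u = x.copy()
    eps = 1e-8  # avoids division by zero in the gradient magnitude
    for _ in range(iters):
        # forward differences (last row/column repeated, so boundary diff is 0)
        dx = np.diff(u, axis=0, append=u[-1:, :])
        dy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        # divergence of the normalized gradient field (TV subgradient)
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        u = u + step * (lam * div - (u - x))  # TV flow + pull toward the data
    return u

def quantize(x, bits=4):
    """Uniform quantizer to 2**bits levels over the observed value range."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2**bits - 1)
    return np.round((x - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
# toy "feature map": piecewise-constant blocks plus noise and a few outliers
fmap = np.kron(rng.normal(size=(4, 4)), np.ones((16, 16)))
fmap += 0.1 * rng.normal(size=fmap.shape)
fmap.flat[rng.integers(0, fmap.size, 10)] += 5.0  # outliers stretch the range

for name, f in [("raw", fmap), ("TV-smoothed", tv_smooth(fmap))]:
    err = np.abs(quantize(f, bits=4) - f).mean()
    print(f"{name:12s} mean 4-bit quantization error: {err:.4f}")
```

On this toy input, the smoothed map typically incurs a lower mean quantization error, since values cluster within the constant blocks and shrunken outliers narrow the quantization range; the exact numbers depend on the penalty weight, step size, and iteration count.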