MOHAQ: Multi-Objective Hardware-Aware Quantization of Recurrent Neural Networks

Authors:
Rezk, Nesma M.
Nordström, Tomas
Stathis, Dimitrios
Ul-Abdin, Zain
Aksoy, Eren Erdal
Hemani, Ahmed
Publication Year:
2021

Abstract

Compressing deep learning models is essential for deploying them on edge devices. Optimization algorithms can automate the selection of compression parameters to adapt to changes in the hardware platform and the application. This article introduces a Multi-Objective Hardware-Aware Quantization (MOHAQ) method, which treats hardware efficiency and inference error as joint objectives for mixed-precision quantization. The proposed method makes evaluating candidate solutions in a large search space feasible through two steps. First, post-training quantization is applied for fast solution evaluation (inference-only search). Second, we propose the "beacon-based search," which retrains only selected solutions and uses them as beacons to estimate the effect of retraining on the remaining solutions. We evaluate our method on a speech recognition model based on the Simple Recurrent Unit (SRU), trained on the TIMIT dataset, targeting the SiLago and Bitfusion platforms. Experiments show that the SRU model can be compressed by up to 8x with post-training quantization without any significant increase in error. On SiLago, we found solutions that achieve 97% of the maximum possible speedup and 86% of the maximum possible energy saving, with only a minor increase in error. On Bitfusion, the beacon-based search reduced the error increase of the inference-only search by up to 4.9 percentage points.
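The two-step evaluation described in the abstract lends itself to a compact illustration. Below is a minimal Python sketch, assuming a per-layer bitwidth search with Pareto selection over (error, hardware cost); all constants, cost models, and the nearest-beacon heuristic are hypothetical stand-ins inferred from the abstract, not the authors' implementation.

```python
# Hypothetical sketch of MOHAQ's two-step evaluation, reconstructed from the
# abstract only: function bodies and constants are illustrative assumptions.
import random

BITWIDTHS = [2, 4, 8]   # candidate precisions per layer (assumed)
NUM_LAYERS = 6          # assumed model depth

def ptq_error(solution):
    """Stand-in for post-training-quantization error of a bitwidth
    assignment (inference-only evaluation, no retraining)."""
    return sum(1.0 / b for b in solution) / len(solution) + random.uniform(0, 0.01)

def hw_cost(solution):
    """Stand-in for a hardware objective, e.g. cycles or energy on the
    target platform; lower is better."""
    return sum(solution) / (8 * len(solution))

def retrained_error(solution):
    """Stand-in for error after quantization-aware retraining."""
    return 0.7 * ptq_error(solution)

def nearest_beacon_correction(solution, beacons):
    """Beacon idea: retrain only a few solutions and transfer the observed
    retraining benefit of the closest beacon (in bitwidth space) to
    unretrained candidates."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    beacon = min(beacons, key=lambda b: dist(solution, b["bits"]))
    return ptq_error(solution) * beacon["gain"]  # predicted post-retrain error

def pareto_front(points):
    """Keep solutions not dominated in (error, cost)."""
    return [p for p in points
            if not any(q["err"] <= p["err"] and q["cost"] <= p["cost"]
                       and q != p for q in points)]

random.seed(0)

# Step 1: inference-only search, using PTQ error as a cheap proxy.
candidates = [[random.choice(BITWIDTHS) for _ in range(NUM_LAYERS)]
              for _ in range(200)]
scored = [{"bits": s, "err": ptq_error(s), "cost": hw_cost(s)} for s in candidates]
front = pareto_front(scored)

# Step 2: retrain a handful of front solutions as beacons ...
beacons = []
for p in front[:3]:
    gain = retrained_error(p["bits"]) / max(p["err"], 1e-9)
    beacons.append({"bits": p["bits"], "gain": gain})

# ... then re-score every candidate with the beacon-predicted error.
for p in scored:
    p["err"] = nearest_beacon_correction(p["bits"], beacons)
print(len(pareto_front(scored)), "Pareto-optimal mixed-precision solutions")
```

The saving this sketch tries to convey is that the expensive retraining step runs only for the few beacon solutions; every other candidate reuses the cheap post-training estimate, scaled by the closest beacon's observed retraining gain.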

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2108.01192
Document Type:
Working Paper