
Optimization of General Matrix Multiply Library for Ternary Weight for Fast DNN Inference.

Authors :
Choi, Seokhyeon
Shim, Kyuhong
Choi, Jungwook
Sung, Wonyong
Shim, Byonghyo
Source :
Journal of Signal Processing Systems for Signal, Image & Video Technology; Oct 2022, Vol. 94, Issue 10, p929-943, 15p
Publication Year :
2022

Abstract

Efficient implementation of deep neural networks (DNNs) on CPU-based systems is critical owing to the proliferation of applications in embedded and Internet of Things systems. Nowadays, most CPUs are equipped with single instruction multiple data (SIMD) instructions, which are used to implement an efficient general matrix multiply (GEMM) library for accelerating DNN inference. Quantized neural networks are actively investigated to simplify DNN computation and memory requirements; however, current CPU libraries do not efficiently support arithmetic operations below eight bits. Hence, we developed TernGEMM, a GEMM library composed of SIMD instructions for DNNs with ternary weights and sub-8-bit activations. TernGEMM is implemented using simple logical operations that replace the long-latency multiply-add operation. Instead of fixing the accumulation bit precision at 32 bits, TernGEMM accumulates the partial sums in a bit-incremental manner to exploit parallelism in 8-bit and 16-bit SIMD instructions. Furthermore, we propose different tile sizes for TernGEMM to better support the diverse dimensions of DNNs. Compared with a state-of-the-art reduced-precision DNN GEMM library, i.e., GEMMLowp, TernGEMM achieves a 1.785x to 4.147x speedup for ResNet50, MobileNet-V2, and EfficientNet-B0, as evaluated on both Intel and ARM CPUs. [ABSTRACT FROM AUTHOR]
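The following is a minimal, hypothetical sketch (not the authors' TernGEMM code, which uses SIMD intrinsics) of the two ideas the abstract describes: encoding ternary weights as "plus" and "minus" bitmasks so the multiply-add collapses to masked add/subtract, and accumulating partial sums in a narrow 16-bit register that is only periodically widened to 32 bits. All names, dimensions, and the spill interval are illustrative assumptions.

```c
/* Hypothetical illustration of ternary-weight GEMM with bit-incremental
 * accumulation; scalar C stand-in for the SIMD kernels described in the
 * abstract. Not the library's actual implementation. */
#include <stdint.h>
#include <stdio.h>

#define M 2   /* output rows      */
#define N 4   /* output columns   */
#define K 8   /* reduction length */

/* C[M][N] = W[M][K] * A[K][N], with W ternary and A sub-8-bit activations. */
static void tern_gemm(const int8_t W[M][K], const uint8_t A[K][N],
                      int32_t C[M][N])
{
    for (int i = 0; i < M; ++i) {
        /* Encode row i of W as "plus" and "minus" bitmasks once. */
        uint32_t plus = 0, minus = 0;
        for (int k = 0; k < K; ++k) {
            if (W[i][k] > 0)      plus  |= 1u << k;
            else if (W[i][k] < 0) minus |= 1u << k;
        }
        for (int j = 0; j < N; ++j) {
            int32_t acc32 = 0;     /* wide accumulator             */
            int16_t acc16 = 0;     /* narrow, bit-incremental sum  */
            for (int k = 0; k < K; ++k) {
                /* A bit test replaces the multiply: add, subtract, or skip. */
                if (plus  & (1u << k)) acc16 += A[k][j];
                if (minus & (1u << k)) acc16 -= A[k][j];
                /* Spill to 32-bit every 4 steps to avoid 16-bit overflow
                 * (the spill interval here is an assumption). */
                if ((k & 3) == 3) { acc32 += acc16; acc16 = 0; }
            }
            C[i][j] = acc32 + acc16;
        }
    }
}

int main(void)
{
    int8_t  W[M][K] = {{1,-1,0,1,0,-1,1,0}, {0,1,1,-1,0,0,-1,1}};
    uint8_t A[K][N];
    int32_t C[M][N];

    for (int k = 0; k < K; ++k)
        for (int j = 0; j < N; ++j)
            A[k][j] = (uint8_t)(k + j);   /* small sub-8-bit test values */

    tern_gemm(W, A, C);

    for (int i = 0; i < M; ++i) {
        for (int j = 0; j < N; ++j)
            printf("%d ", C[i][j]);
        printf("\n");
    }
    return 0;
}
```

In an actual SIMD kernel, the per-element bit tests above would be replaced by vector-wide logical and add/subtract instructions, and the 16-bit to 32-bit widening would use packed conversion instructions; this scalar version only conveys the data flow.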

Details

Language :
English
ISSN :
1939-8018
Volume :
94
Issue :
10
Database :
Complementary Index
Journal :
Journal of Signal Processing Systems for Signal, Image & Video Technology
Publication Type :
Academic Journal
Accession number :
158999595
Full Text :
https://doi.org/10.1007/s11265-022-01782-3