
Sparq: A Custom RISC-V Vector Processor for Efficient Sub-Byte Quantized Inference

Authors:
Dupuis, Théo
Fournier, Yoan
AskariHemmat, MohammadHossein
Zarif, Nizar El
Leduc-Primeau, François
David, Jean Pierre
Savaria, Yvon
Publication Year:
2023

Abstract

Convolutional Neural Networks (CNNs) are used in a wide range of applications, with full-precision CNNs achieving high accuracy at the expense of portability. Recent progress in quantization techniques has demonstrated that sub-byte Quantized Neural Networks (QNNs) achieve comparable or superior accuracy while significantly reducing the computational cost and memory footprint. However, sub-byte computation on commodity hardware is sub-optimal due to the lack of support for such precision. In this paper, we introduce Sparq, a Sub-byte vector Processor designed for the AcceleRation of QNN inference. This processor is based on a modified version of Ara, an open-source 64-bit RISC-V "V"-compliant processor. Sparq is implemented in GlobalFoundries 22FDX FD-SOI technology and extends the Instruction Set Architecture (ISA) with a new multiply-shift-accumulate instruction to improve sub-byte computation efficiency. The floating-point unit is also removed to minimize area and power usage. To demonstrate Sparq's performance, we implement an ultra-low-precision (1-bit to 4-bit) vectorized conv2d operation that takes advantage of the dedicated hardware. We show that Sparq significantly accelerates sub-byte computations, achieving 3.2x and 1.7x speedups over an optimized 16-bit 2D convolution for 2-bit and 4-bit quantization, respectively.

Comment: 5 pages, accepted for publication in the 21st IEEE Interregional NEWCAS Conference (2023)
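To make the idea behind a multiply-shift-accumulate over packed sub-byte operands concrete, the sketch below emulates such an operation in scalar C for 2-bit lanes packed into a 32-bit word. This is only an illustrative assumption of how the computation could behave; the actual Sparq instruction semantics (operand widths, signedness, shift placement, lane count) are defined in the paper and may differ.

```c
/* Scalar sketch of a multiply-shift-accumulate over packed 2-bit lanes.
 * Hypothetical illustration only; not the Sparq instruction definition. */
#include <stdint.h>
#include <stdio.h>

/* Extract the i-th unsigned 2-bit field from a packed 32-bit word. */
static inline uint32_t get2(uint32_t packed, int i) {
    return (packed >> (2 * i)) & 0x3u;
}

/* acc += (a_i * b_i) >> shift, accumulated over all 16 packed 2-bit lanes. */
static int32_t msa_2bit(uint32_t a, uint32_t b, int shift, int32_t acc) {
    for (int i = 0; i < 16; ++i) {
        acc += (int32_t)((get2(a, i) * get2(b, i)) >> shift);
    }
    return acc;
}

int main(void) {
    uint32_t act = 0xAAAAAAAAu; /* every 2-bit lane holds the value 2 */
    uint32_t wgt = 0x55555555u; /* every 2-bit lane holds the value 1 */
    /* 16 lanes * (2 * 1) = 32 */
    printf("acc = %d\n", msa_2bit(act, wgt, 0, 0));
    return 0;
}
```

In a vector processor such as Sparq, this per-word loop would instead be performed across the lanes of a vector register in a single instruction, which is where the speedup over a 16-bit convolution baseline comes from.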

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2306.09905
Document Type:
Working Paper