
A Case of On-Chip Memory Subsystem Design for Low-Power CNN Accelerators.

Authors :
Wang, Ying
Li, Huawei
Li, Xiaowei
Source :
IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems; Oct2018, Vol. 37 Issue 10, p1971-1984, 14p
Publication Year :
2018

Abstract

The rapid development of machine learning is enabling a wealth of novel applications, such as image and speech recognition on embedded and mobile devices. However, state-of-the-art deep learning models such as convolutional neural networks (CNNs) demand so much on-chip storage and compute that low-power mobile or embedded systems cannot handle them smoothly. To fit large CNN models into mobile and cutting-edge IoT or cyber-physical devices, we propose an efficient on-chip memory architecture for CNN inference acceleration and show its application to an in-house single-instruction multiple-data (SIMD) machine learning processor. The redesigned on-chip memory subsystem, Memsqueezer, includes an active weight buffer and a data buffer set that employ specialized compression methods to reduce the footprint of CNN parameters (weights) and activation data, respectively. The Memsqueezer buffers compress the data and weight sets according to the dataflow of the computation, and also include a built-in redundancy detection mechanism that actively scans the working set of a CNN to boost inference performance by eliminating computational redundancy in the model. Our experiments show that CNN processors with Memsqueezer buffers achieve more than a 2× performance improvement and reduce energy consumption by 85% on average over a conventional buffer design with the same area budget. [ABSTRACT FROM AUTHOR]
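The redundancy-elimination idea in the abstract can be illustrated with a minimal toy sketch (an assumption for illustration only, not the paper's actual Memsqueezer mechanism): many CNN activations are zero after ReLU, so multiply-accumulates against them contribute nothing and can be skipped when the buffer detects them.

```python
# Toy illustration of computation-redundancy elimination on sparse
# activations (hypothetical example, not the Memsqueezer design):
# skip multiply-accumulates whose activation operand is zero.

def dense_dot(weights, activations):
    """Baseline: multiply-accumulate over every element."""
    acc = 0.0
    for w, a in zip(weights, activations):
        acc += w * a
    return acc

def zero_skipping_dot(weights, activations):
    """Skip MACs on zero activations (common after ReLU).

    Returns the accumulated result and the number of MACs
    actually performed, to show the work saved.
    """
    acc = 0.0
    macs = 0
    for w, a in zip(weights, activations):
        if a != 0.0:
            acc += w * a
            macs += 1
    return acc, macs

# ReLU-sparse activation vector: only 3 of 8 entries are nonzero.
activations = [0.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.5, 0.0]
weights = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]

ref = dense_dot(weights, activations)
out, macs = zero_skipping_dot(weights, activations)
assert out == ref  # identical result, less work
print(macs, "of", len(weights), "MACs performed")  # 3 of 8
```

The same principle scales to a full convolution loop nest: detecting zero (or otherwise redundant) operands in the working set lets the accelerator skip both the memory fetch and the arithmetic, which is one way a smarter buffer can translate into both speed and energy savings.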

Details

Language :
English
ISSN :
0278-0070
Volume :
37
Issue :
10
Database :
Complementary Index
Journal :
IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems
Publication Type :
Academic Journal
Accession number :
131880713
Full Text :
https://doi.org/10.1109/TCAD.2017.2778060