
Designing efficient accelerator of depthwise separable convolutional neural network on FPGA.

Authors :
Ding, Wei
Huang, Zeyu
Huang, Zunkai
Tian, Li
Wang, Hui
Feng, Songlin
Source :
Journal of Systems Architecture, Aug 2019, Vol. 97, p. 278-286. 9p.
Publication Year :
2019

Abstract

In recent years, convolutional neural networks (CNNs) have achieved state-of-the-art results in many computer vision tasks. However, traditional CNNs are computation-intensive and memory-intensive, making them unsuitable for mobile edge computing scenarios with limited computing resources and low power budgets. Depthwise separable CNNs significantly reduce the number of model parameters and improve calculation speed, so they are naturally suited to mobile edge computing applications. In this paper, we propose a Field Programmable Gate Array (FPGA)-based depthwise separable CNN accelerator in which all layers work concurrently in a pipelined fashion to improve system throughput and performance. To implement the accelerator, we present a custom computing engine architecture that handles the dataflow between adjacent layers using double-buffering-based memory channels. In addition, a data tiling technique is adopted in the fully connected layers to divide large-dimension matrix multiplication into small matrices. Finally, the proposed accelerator has been implemented and evaluated on an Intel Arria 10 FPGA. The experimental results indicate that the proposed depthwise separable CNN accelerator achieves a performance of 98.9 GOP/s, up to 17.6× speedup over a CPU implementation, and 29.4× lower power consumption than a GPU implementation. [ABSTRACT FROM AUTHOR]
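The abstract's central premise is that depthwise separable convolution cuts parameter count and arithmetic relative to standard convolution. The following minimal Python sketch (not taken from the paper) illustrates that reduction for an assumed example layer shape (3×3 kernel, 128 input channels, 256 output channels); the function names and numbers are illustrative only.

```python
# Sketch: parameter counts of a standard convolution vs. a depthwise separable
# convolution, to show why the latter suits resource-limited edge devices.
# The layer shape (k=3, c_in=128, c_out=256) is an assumed example, not from the paper.

def standard_conv_params(k, c_in, c_out):
    # A standard convolution learns one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1x1 convolution mixing channels (c_in * c_out weights).
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 128, 256
std = standard_conv_params(k, c_in, c_out)        # 294,912 weights
dws = depthwise_separable_params(k, c_in, c_out)  # 33,920 weights
print(f"standard: {std}, depthwise separable: {dws}, reduction: {std / dws:.1f}x")
# -> roughly an 8.7x reduction for this layer shape
```

For a 3×3 kernel the reduction approaches 9× as the output channel count grows, which is the source of the compute and memory savings the accelerator exploits.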

Details

Language :
English
ISSN :
1383-7621
Volume :
97
Database :
Academic Search Index
Journal :
Journal of Systems Architecture
Publication Type :
Academic Journal
Accession number :
137013742
Full Text :
https://doi.org/10.1016/j.sysarc.2018.12.008