
An Energy-Efficient Accelerator Architecture with Serial Accumulation Dataflow for Deep CNNs

Authors:
Ahmadi, Mehdi
Vakili, Shervin
Langlois, J. M. Pierre
Publication Year:
2020

Abstract

Convolutional Neural Networks (CNNs) have shown outstanding accuracy for many vision tasks in recent years. When deploying CNNs on portable devices and embedded systems, however, the large number of parameters and computations results in long processing times and short battery life. An important factor in designing CNN hardware accelerators is to efficiently map the convolution computation onto hardware resources. In addition, to save battery life and reduce energy consumption, it is essential to reduce the number of DRAM accesses, since DRAM accesses consume orders of magnitude more energy than other operations in hardware. In this paper, we propose an energy-efficient architecture that maximally utilizes its computational units for convolution operations while requiring a low number of DRAM accesses. The implementation results show that the proposed architecture performs one image recognition task using the VGGNet model with a latency of 393 ms and only 251.5 MB of DRAM accesses.

Comment: 4 pages
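
The record above does not spell out the serial accumulation dataflow in detail, but the idea named in the title can be illustrated with a generic output-stationary loop nest in which each output pixel is accumulated serially in a single on-chip register and written back only once. The sketch below is an illustration under that assumption, not the authors' implementation; all names (conv2d_serial_accumulation, ifmap, weights) are hypothetical.

import numpy as np

def conv2d_serial_accumulation(ifmap, weights):
    """Output-stationary 2D convolution sketch: every output pixel is
    produced by serially accumulating all of its partial sums in one
    local accumulator, so it is written to memory exactly once.

    ifmap   : (C, H, W)          input feature map
    weights : (M, C, K, K)       M filters, C input channels, KxK kernels
    returns : (M, H-K+1, W-K+1)  output feature map (no padding, stride 1)
    """
    C, H, W = ifmap.shape
    M, _, K, _ = weights.shape
    out_h, out_w = H - K + 1, W - K + 1
    ofmap = np.zeros((M, out_h, out_w), dtype=np.float64)

    for m in range(M):                 # output channel
        for y in range(out_h):         # output row
            for x in range(out_w):     # output column
                acc = 0.0              # single local accumulator ("serial" accumulation)
                for c in range(C):     # input channel
                    for ky in range(K):
                        for kx in range(K):
                            acc += ifmap[c, y + ky, x + kx] * weights[m, c, ky, kx]
                ofmap[m, y, x] = acc   # each output value written back exactly once
    return ofmap

The point of keeping the accumulation serial and local is that partial sums never leave the compute unit; only finished output values travel to off-chip memory, which is how a dataflow choice of this kind translates into fewer DRAM accesses.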

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2002.07711
Document Type:
Working Paper