
Enhancing the Utilization of Processing Elements in Spatial Deep Neural Network Accelerators.

Authors :
Asadikouhanjani, Mohammadreza
Ko, Seok-Bum
Source :
IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems; Sep 2021, Vol. 40, Issue 9, p1947-1951, 5p
Publication Year :
2021

Abstract

Equipping mobile platforms with deep learning applications is valuable: it enables healthcare services in remote areas, improves privacy, and lowers the required communication bandwidth. An efficient computation engine improves the performance of these platforms when running deep neural networks (DNNs). Energy-efficient DNN accelerators prune computations by skipping sparse (zero-valued) operands and by detecting negative output features early. Spatial DNN accelerators can, in principle, support such computation-pruning techniques better than other common architectures, such as systolic arrays, but they need a separate high-bandwidth data-distribution fabric, such as buses or trees, to run these techniques efficiently and avoid network-on-chip (NoC) stalls. Spatial designs also suffer from divergence and unequal work distribution, so applying computation-pruning techniques to a spatial design still causes stalls inside the computation engine, even when the design is equipped with a high-bandwidth NoC feeding the processing elements (PEs). In a spatial architecture, PEs that finish their tasks earlier have slack time relative to the others. In this article, we propose an architecture with negligible area overhead that shares the scratchpads between the PEs in a novel way to exploit the slack time caused by computation-pruning techniques or by the NoC format. With our dataflow, a spatial engine benefits more efficiently from computation-pruning and data-reuse techniques. Compared to the reference design, the proposed method achieves a 1.24× speedup and a 1.18× improvement in energy efficiency per inference.
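The abstract combines two ideas: pruning zero-operand multiply-accumulates (MACs) creates slack at some PEs, and sharing work between PEs absorbs that slack. The sketch below is a toy, cycle-level Python model of those two ideas only; the function name, the work-stealing policy, and the simplifying assumptions (free zero detection, one useful MAC per cycle, instant transfers, no result routing) are illustrative inventions, not the paper's architecture.

```python
# Toy model only: sparsity skipping plus slack-time work sharing between PEs.
# This is a simplifying sketch, not the accelerator proposed in the paper.
from collections import deque

def simulate(pe_workloads):
    """Each workload is a list of (weight, activation) pairs assigned to one PE.
    Zero-operand pairs are pruned at no cycle cost; a PE whose queue drains
    early (slack) steals a pending pair from the longest remaining queue."""
    queues = [deque(w) for w in pe_workloads]
    partial_sums = [0] * len(queues)
    cycles = 0
    while any(queues):
        for pe, q in enumerate(queues):
            if not q:
                # Slack time: borrow one pair from the busiest PE's queue.
                # (A real design would also have to route the partial sum back.)
                donor = max(range(len(queues)), key=lambda i: len(queues[i]))
                if queues[donor]:
                    q.append(queues[donor].pop())
            while q:
                w, a = q.popleft()
                if w != 0 and a != 0:          # sparsity skipping: zero MACs are free
                    partial_sums[pe] += w * a  # one useful MAC per cycle
                    break
        cycles += 1
    return partial_sums, cycles

if __name__ == "__main__":
    sums, cycles = simulate([
        [(1, 2), (0, 5), (3, 0), (2, 2)],  # sparse workload -> finishes early
        [(1, 1), (2, 3), (4, 2), (1, 5)],  # dense workload -> finishes late
    ])
    print(sums, cycles)  # with sharing: 3 cycles instead of 4 without it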

Details

Language :
English
ISSN :
0278-0070
Volume :
40
Issue :
9
Database :
Complementary Index
Journal :
IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems
Publication Type :
Academic Journal
Accession number :
153187959
Full Text :
https://doi.org/10.1109/TCAD.2020.3031240