
Accelerating Deep Neural Network Computation on a Low Power Reconfigurable Architecture

Authors:
Chaitali Chakrabarti
Hun-Seok Kim
Ronald G. Dreslinski
Yan Xiong
Trevor Mudge
Jian Zhou
David Blaauw
Subhankar Pal
Source:
ISCAS, Scopus-Elsevier
Publication Year:
2020
Publisher:
IEEE, 2020.

Abstract

Recent work on neural network architectures has focused on bridging the gap between performance/efficiency and programmability. We consider implementations of three popular neural networks, ResNet, AlexNet, and the ASGD weight-dropped recurrent neural network (AWD RNN), on a low-power programmable architecture, Transformer. The architecture consists of lightweight cores interconnected by caches and crossbars that support run-time reconfiguration between shared and private cache modes of operation. We present efficient implementations of key neural network kernels and evaluate the performance of each kernel under the different cache modes. The best-performing cache modes are then used in the implementation of each end-to-end network. Simulation results show superior performance, with ResNet, AlexNet, and AWD RNN achieving 188.19 GOPS/W, 150.53 GOPS/W, and 120.68 GOPS/W, respectively, in the 14 nm technology node.
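
The methodology described in the abstract, profiling each kernel under both shared and private cache modes and then composing the end-to-end network from the best-performing mode per kernel, can be illustrated with a minimal sketch. The kernel names, mode labels, and GOPS/W figures below are hypothetical placeholders for illustration only, not measurements or code from the paper.

```python
# Illustrative sketch (not from the paper): select the best cache mode per
# kernel from profiled efficiency numbers, then report the per-kernel schedule
# that an end-to-end network would use. All names and values are made up.

PROFILE = {
    # kernel          : {cache mode: profiled efficiency in GOPS/W (hypothetical)}
    "conv3x3"         : {"shared": 175.0, "private": 142.0},
    "fully_connected" : {"shared": 101.0, "private": 131.0},
    "max_pool"        : {"shared":  88.0, "private":  93.0},
    "lstm_cell"       : {"shared": 118.0, "private": 109.0},
}

def best_mode(kernel: str) -> tuple[str, float]:
    """Return the cache mode with the highest profiled GOPS/W for a kernel."""
    modes = PROFILE[kernel]
    mode = max(modes, key=modes.get)
    return mode, modes[mode]

if __name__ == "__main__":
    # The end-to-end network would reconfigure the caches at each kernel
    # boundary, using whichever mode profiled best for that kernel.
    for kernel in PROFILE:
        mode, gops_per_watt = best_mode(kernel)
        print(f"{kernel:16s} -> {mode:7s} mode ({gops_per_watt:.1f} GOPS/W)")
```

The sketch assumes that cache-mode reconfiguration happens at kernel boundaries, which matches the abstract's description of per-kernel cache-mode selection feeding the end-to-end implementation.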

Details

Database:
OpenAIRE
Journal:
2020 IEEE International Symposium on Circuits and Systems (ISCAS)
Accession number:
edsair.doi.dedup.....c9887f9a4d7daf719c7ed3ae5b2d7c19
Full Text:
https://doi.org/10.1109/iscas45731.2020.9180871