
DyVEDeep: Dynamic Variable Effort Deep Neural Networks.

Authors :
Ganapathy, Sanjay
Venkataramani, Swagath
Sriraman, Giridhur
Ravindran, Balaraman
Raghunathan, Anand
Source :
ACM Transactions on Embedded Computing Systems; Jun 2020, Vol. 19 Issue 3, p1-24, 24p
Publication Year :
2020

Abstract

Deep Neural Networks (DNNs) have advanced the state of the art in a variety of machine learning tasks and are deployed in an increasing number of products and services. However, the computational requirements of training and evaluating large-scale DNNs are growing at a much faster pace than the capabilities of the underlying hardware platforms on which they are executed. To address this challenge, one promising approach is to exploit the error-resilient nature of DNNs by skipping or approximating computations that have negligible impact on classification accuracy. Almost all prior efforts in this direction propose static DNN approximations, either by pruning network connections, implementing computations at lower precision, or compressing weights. In this work, we propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep) to reduce the computational requirements of DNNs during inference. Complementary to the aforementioned static approaches, DyVEDeep is a dynamic approach that exploits heterogeneity in the DNN inputs to improve their compute efficiency with comparable classification accuracy and without requiring any re-training. DyVEDeep equips DNNs with dynamic effort mechanisms that identify the computations critical to classifying a given input and focus computational effort on them, while skipping or approximating the rest. We propose three dynamic effort mechanisms that operate at different levels of granularity, viz., the neuron, feature, and layer levels. We build DyVEDeep versions of six popular image recognition benchmarks (CIFAR-10, AlexNet, OverFeat, VGG-16, SqueezeNet, and Deep-Compressed-AlexNet) within the Caffe deep learning framework. We evaluate DyVEDeep on two platforms: a high-performance server with a 2.7 GHz Intel Xeon E5-2680 processor and 128 GB memory, and a low-power Raspberry Pi board with an ARM Cortex-A53 processor and 1 GB memory. Across all benchmarks, DyVEDeep achieves a 2.47×-5.15× reduction in the number of scalar operations, which translates to 1.94×-2.23× and 1.46×-3.46× performance improvements over well-optimized baselines on the Xeon server and the Raspberry Pi, respectively, with comparable classification accuracy. [ABSTRACT FROM AUTHOR]
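
To make the neuron-level idea concrete, the following is a minimal, hypothetical sketch of one way a dynamic effort mechanism could work: accumulate a neuron's dot product starting with the largest-magnitude weights, and stop early once the unprocessed terms provably cannot change the ReLU outcome. This is an illustration in the spirit of the abstract, not the paper's actual algorithm; the function name, threshold, and chunk size are assumptions.

```python
import numpy as np

def dynamic_effort_neuron(weights, inputs, threshold=4.0, chunk=64):
    """Hypothetical neuron-level dynamic effort (not the paper's code).

    Accumulates the dot product in chunks, largest-magnitude weights
    first, and exits early once the remaining terms provably cannot
    change the ReLU outcome (exact skip) or the partial sum is already
    deep in the activation's "on" region (approximate early exit).
    """
    order = np.argsort(-np.abs(weights))        # largest |w| first
    partial = 0.0
    for start in range(0, len(order), chunk):
        idx = order[start:start + chunk]
        partial += float(weights[idx] @ inputs[idx])
        rest = order[start + chunk:]
        # Upper bound on what the unprocessed terms could contribute.
        bound = float(np.abs(weights[rest]) @ np.abs(inputs[rest]))
        if partial + bound < 0.0:               # ReLU output is exactly 0
            return 0.0                          # skip the remaining work
        if partial - bound > threshold:         # provably well above zero
            return partial                      # approximate early exit
    return max(partial, 0.0)                    # exact ReLU, no early exit

# Example: a 512-input neuron with random weights and activations.
rng = np.random.default_rng(0)
w = rng.normal(size=512)
x = rng.normal(size=512)
print(dynamic_effort_neuron(w, x))
```

Ordering terms by weight magnitude front-loads the most significant contributions, so early exits fire sooner for inputs far from the decision boundary; the skip path for neurons that provably saturate at zero is exact, while the "on"-region exit trades a small approximation error for skipped multiply-accumulates.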

Details

Language :
English
ISSN :
1539-9087
Volume :
19
Issue :
3
Database :
Complementary Index
Journal :
ACM Transactions on Embedded Computing Systems
Publication Type :
Academic Journal
Accession number :
145358602
Full Text :
https://doi.org/10.1145/3372882