
Eager Pruning

Authors:
Mingcong Song
Tao Li
Jiaqi Zhang
Xiangru Chen
Source:
ISCA
Publication Year:
2019
Publisher:
ACM, 2019.

Abstract

Today's large, rapidly changing data demands fast training of Deep Neural Networks (DNNs) in a wide range of applications. However, training a DNN with a huge number of parameters involves intensive computation. Motivated by the redundancy that exists in DNNs and the observation that the significance ranking of the weights changes only slightly during training, we propose Eager Pruning, which speeds up DNN training by moving pruning to an early stage. Eager Pruning is supported by an algorithm and architecture co-design. The proposed algorithm directs the architecture to identify and prune insignificant weights during training without accuracy loss, and a novel architecture transforms the reduced training computation into performance improvement. Our proposed Eager Pruning system achieves an average 1.91x speedup over a state-of-the-art hardware accelerator and 6.31x better energy efficiency than Nvidia GPUs.
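To make the core idea concrete, below is a minimal sketch of pruning moved to an early training stage, not the authors' implementation. It assumes magnitude-based pruning in PyTorch; the model, training data, pruning step, and sparsity level are all illustrative assumptions.

```python
# Sketch of the eager-pruning idea: prune low-magnitude weights early in
# training (rather than after convergence) and keep them zero afterwards.
# Everything below (model, data, step 20, 80% sparsity) is hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

masks = {}  # per-parameter binary masks; pruned weights stay zero


def eager_prune(module, sparsity):
    """Zero out the smallest-magnitude weights and remember the mask."""
    for name, p in module.named_parameters():
        if p.dim() < 2:  # skip biases
            continue
        thresh = torch.quantile(p.detach().abs(), sparsity)
        masks[name] = (p.detach().abs() > thresh).float()
        p.data.mul_(masks[name])


for step in range(200):
    x = torch.randn(32, 64)               # stand-in training batch
    y = torch.randint(0, 10, (32,))
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Eager pruning: prune early (here, after 20 steps) instead of waiting
    # for convergence, relying on the observation that the significance
    # ranking of weights changes little as training proceeds.
    if step == 20:
        eager_prune(model, sparsity=0.8)

    # Keep pruned weights at zero for the remainder of training.
    for name, p in model.named_parameters():
        if name in masks:
            p.data.mul_(masks[name])
```

The remaining training steps then operate on a sparse network, which is the reduced computation the paper's proposed architecture converts into speedup; this sketch only models the algorithmic side, not the hardware.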

Details

Database:
OpenAIRE
Journal:
Proceedings of the 46th International Symposium on Computer Architecture
Accession number:
edsair.doi...........d53e474c351953c524b75b5590554b11
Full Text:
https://doi.org/10.1145/3307650.3322263