A Survey on Deep Neural Network Pruning-Taxonomy, Comparison, Analysis, and Recommendations

Authors :
Cheng, Hongrong
Zhang, Miao
Shi, Javen Qinfeng
Publication Year :
2023

Abstract

Modern deep neural networks, particularly recent large language models, come with massive model sizes that require significant computational and storage resources. To enable the deployment of modern models in resource-constrained environments and to accelerate inference, researchers have increasingly explored pruning techniques as a popular research direction in neural network compression. However, there is a dearth of up-to-date comprehensive review papers on pruning. To address this issue, in this survey we provide a comprehensive review of existing research on deep neural network pruning, organized in a taxonomy of 1) universal/specific speedup, 2) when to prune, 3) how to prune, and 4) fusion of pruning and other compression techniques. We then provide a thorough comparative analysis of seven pairs of contrasting settings for pruning (e.g., unstructured/structured) and explore emerging topics, including post-training pruning, different levels of supervision for pruning, and broader applications (e.g., adversarial robustness), to shed light on the commonalities and differences of existing methods and to lay the foundation for further method development. To facilitate future research, we build a curated collection of datasets, networks, and evaluations across different applications. Finally, we offer recommendations on selecting pruning methods and outline promising research directions. We maintain a repository at https://github.com/hrcheng1066/awesome-pruning.
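To make the unstructured/structured contrast mentioned above concrete, the following is a minimal sketch of unstructured layer-wise magnitude pruning, a common baseline in this literature. It is illustrative only, not a method from the survey; the function name, the 50% sparsity ratio, and the toy MLP are assumptions chosen for the example.

import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> nn.Module:
    """Zero out the smallest-magnitude weights in every Linear layer.

    Generic unstructured magnitude pruning (a baseline technique),
    not the procedure of any specific method surveyed in the paper.
    """
    for module in model.modules():
        if isinstance(module, nn.Linear):
            weight = module.weight.data
            # Threshold at the k-th smallest absolute weight, where
            # k = sparsity * (number of weights in this layer).
            k = int(sparsity * weight.numel())
            if k == 0:
                continue
            threshold = weight.abs().flatten().kthvalue(k).values
            # Keep weights above the threshold; zero out the rest in place.
            mask = (weight.abs() > threshold).float()
            weight.mul_(mask)
    return model

# Usage: prune roughly 50% of the weights of a toy MLP, then check sparsity.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.5)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
zeros = sum((p == 0).sum().item() for p in model.parameters() if p.dim() > 1)
print(f"weight sparsity: {zeros / total:.2%}")

Structured pruning, by contrast, would remove whole rows, columns, filters, or attention heads rather than individual weights, trading some flexibility for speedups on standard hardware.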

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438471711
Document Type :
Electronic Resource