
Conditional Automated Channel Pruning for Deep Neural Networks

Authors:
Liu, Yixin
Guo, Yong
Liu, Zichang
Liu, Haohua
Zhang, Jingjie
Chen, Zejun
Liu, Jing
Chen, Jian
Publication Year: 2020

Abstract

Model compression aims to reduce the redundancy of deep networks to obtain compact models. Recently, channel pruning has become one of the predominant compression methods for deploying deep models on resource-constrained devices. Most channel pruning methods use a single, fixed compression rate for all layers of the model, which may not be optimal. To address this issue, given a target compression rate for the whole model, one can search for the optimal compression rate for each layer. Nevertheless, these methods perform channel pruning for one specific target compression rate; when multiple compression rates are required, they have to repeat the channel pruning process for each rate, which is inefficient and unnecessary. To overcome this limitation, we propose a Conditional Automated Channel Pruning (CACP) method that obtains compressed models with different compression rates through a single channel pruning process. To this end, we develop a conditional model that takes an arbitrary compression rate as input and outputs the corresponding compressed model. In the experiments, the resultant models with different compression rates consistently outperform models compressed by existing methods that run a separate channel pruning process for each target compression rate.
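The central idea of the abstract is a conditional model that maps an arbitrary target compression rate to a compressed network. Below is a minimal sketch of that idea, assuming a PyTorch setting; the class `ConditionalPruner`, its layer structure, and the budget-rescaling step are hypothetical illustrations and not the authors' architecture. The sketch only shows how a single model can emit per-layer keep ratios for any requested compression rate.

```python
import torch
import torch.nn as nn

class ConditionalPruner(nn.Module):
    """Hypothetical conditional module: target compression rate -> per-layer keep ratios."""
    def __init__(self, num_layers, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_layers),
            nn.Sigmoid(),  # per-layer keep ratios in (0, 1)
        )

    def forward(self, compression_rate):
        # compression_rate: tensor of shape (1,); e.g. 0.5 means keep ~50% of channels overall
        rate = compression_rate.view(1, 1)
        keep_ratios = self.net(rate).squeeze(0)
        # Rescale so the average keep ratio roughly matches the requested budget
        # (an illustrative heuristic only; the paper's budget constraint may differ).
        keep_ratios = keep_ratios * (rate.squeeze() / keep_ratios.mean().clamp(min=1e-6))
        return keep_ratios.clamp(0.0, 1.0)

# One pruner serves every target rate: no separate pruning run per rate.
pruner = ConditionalPruner(num_layers=4)
for r in (0.3, 0.5, 0.7):
    keep = pruner(torch.tensor([r]))
    print(f"target rate {r}: per-layer keep ratios {keep.tolist()}")
```

Because the module conditions on the rate, a compressed configuration for a new target rate is obtained with a single forward pass rather than by repeating the whole pruning pipeline.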

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2009.09724
Document Type: Working Paper
Full Text: https://doi.org/10.1109/LSP.2021.3088323