
End-to-End Supermask Pruning: Learning to Prune Image Captioning Models.

Authors :
Tan, Jia Huei
Chan, Chee Seng
Chuah, Joon Huang
Source :
Pattern Recognition. Feb 2022, Vol. 122.
Publication Year :
2022

Abstract

• This is the first extensive attempt at exploring model pruning for the image captioning task. Empirically, we show that deep captioning networks that are 80% to 95% sparse can either match or even slightly outperform their dense counterparts. In addition, we propose a pruning method, Supermask Pruning (SMP), that performs continuous and gradual sparsification during the training stage based on parameter sensitivity, in an end-to-end fashion.
• We investigate an ideal way to combine pruning with fine-tuning of a pre-trained CNN, and show that both decoder pruning and training should be done before pruning the encoder.
• We release pre-trained sparse models for UD and ORT that achieve CIDEr scores >120 on the MS-COCO dataset, yet are only 8.7 MB (a 96% reduction compared to dense UD) and 14.5 MB (a 94% reduction compared to dense ORT) in model size. Our code and pre-trained models are publicly available at https://github.com/jiahuei/sparse-image-captioning

With the advancement of deep models, research on image captioning has achieved remarkable gains in raw performance over the last decade, along with increasing model complexity and computational cost. Surprisingly, however, work on compressing deep networks for the image captioning task has received little to no attention. For the first time in image captioning research, we provide an extensive comparison of various unstructured weight pruning methods on three popular image captioning architectures, namely Soft-Attention, Up-Down and Object Relation Transformer. Following this, we propose a novel end-to-end weight pruning method that performs gradual sparsification based on weight sensitivity to the training loss. The pruning schemes are then extended with encoder pruning, where we show that conducting decoder pruning and training simultaneously, prior to encoder pruning, provides good overall performance. Empirically, we show that an 80% to 95% sparse network (up to 75% reduction in model size) can either match or outperform its dense counterpart. The code and pre-trained models for Up-Down and Object Relation Transformer, which achieve CIDEr scores > 120 on the MS-COCO dataset with only 8.7 MB and 14.5 MB in model size (reductions of 96% and 94% respectively against the dense versions), are publicly available at https://github.com/jiahuei/sparse-image-captioning. [ABSTRACT FROM AUTHOR]
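The abstract describes SMP only at a high level: continuous, gradual sparsification during training, driven by parameter sensitivity and learned end-to-end. As a rough illustration of the general supermask idea (not the authors' exact SMP implementation), the PyTorch sketch below uses a hypothetical SupermaskLinear layer in which each weight gets a learnable score, binarized in the forward pass with a straight-through estimator so the scores can be trained jointly with the rest of the model. The layer name, score initialization and fixed sparsity level are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Hard-threshold mask scores to {0, 1}; pass gradients straight through."""

    @staticmethod
    def forward(ctx, scores, sparsity):
        # Keep the (1 - sparsity) fraction of weights with the largest scores.
        k = max(1, int(round((1.0 - sparsity) * scores.numel())))
        mask = torch.zeros_like(scores)
        topk_idx = torch.topk(scores.flatten(), k).indices
        mask.view(-1)[topk_idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradient w.r.t. the scores passes unchanged;
        # no gradient is needed for the sparsity level.
        return grad_output, None


class SupermaskLinear(nn.Linear):
    """Linear layer whose weights are gated by a learned binary mask (illustrative)."""

    def __init__(self, in_features, out_features, bias=True, sparsity=0.8):
        super().__init__(in_features, out_features, bias)
        self.sparsity = sparsity
        # One learnable score per weight; a higher score means more likely to survive pruning.
        self.scores = nn.Parameter(0.01 * torch.randn_like(self.weight))

    def forward(self, x):
        mask = BinarizeSTE.apply(self.scores, self.sparsity)
        return F.linear(x, self.weight * mask, self.bias)


# Usage: swap such layers into the captioning decoder and train as usual;
# the scores (and the weights) receive gradients end-to-end.
layer = SupermaskLinear(512, 512, sparsity=0.9)
out = layer(torch.randn(4, 512))

Under a gradual schedule like the one the abstract describes, the sparsity level would be ramped from 0 toward the target (e.g. 0.8 to 0.95) over the course of training rather than held fixed as it is in this sketch.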

Details

Language :
English
ISSN :
00313203
Volume :
122
Database :
Academic Search Index
Journal :
Pattern Recognition
Publication Type :
Academic Journal
Accession number :
153325225
Full Text :
https://doi.org/10.1016/j.patcog.2021.108366