
Towards Accurate Post-Training Quantization for Vision Transformer

Authors :
Ding, Yifu
Qin, Haotong
Yan, Qinghua
Chai, Zhenhua
Liu, Junjie
Wei, Xiaolin
Liu, Xianglong
Publication Year :
2023

Abstract

The vision transformer has emerged as a promising architecture for vision tasks. However, its intensive computation and non-negligible latency hinder real-world deployment. Post-training quantization is a widespread model compression technique, yet existing methods still cause severe performance drops on vision transformers. We find the main reasons lie in (1) the existing calibration metric is inaccurate in measuring the quantization influence for extremely low-bit representations, and (2) the existing quantization paradigm is unfriendly to the power-law distribution of Softmax. Based on these observations, we propose a novel Accurate Post-training Quantization framework for Vision Transformers, namely APQ-ViT. We first present a unified Bottom-elimination Blockwise Calibration scheme that optimizes the calibration metric to perceive the overall quantization disturbance in a blockwise manner and to prioritize the crucial quantization errors that have a greater influence on the final output. Then, we design a Matthew-effect Preserving Quantization for Softmax that maintains the power-law character of the attention distribution and preserves the function of the attention mechanism. Comprehensive experiments on large-scale classification and detection datasets demonstrate that APQ-ViT surpasses existing post-training quantization methods by convincing margins, especially in lower bit-width settings (e.g., average improvements of up to 5.17% on classification and 24.43% on detection under W4A4). We also highlight that APQ-ViT is versatile and works well on diverse transformer variants.

Comment: 9 pages, 5 figures, accepted by ACM Multimedia 2022
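To make the two observations concrete, the following is a minimal, hypothetical sketch (not the authors' released code or their exact method): it contrasts a plain uniform 4-bit quantizer with a power-of-two (log2) quantizer on post-Softmax attention probabilities, whose power-law distribution is what a Softmax-friendly quantizer needs to respect, and it measures a calibration error on a whole block's output rather than layer by layer, in the spirit of blockwise calibration. The function names `uniform_quant`, `log2_quant`, and `blockwise_error` are illustrative, and PyTorch is assumed.

```python
# Illustrative sketch only; assumes PyTorch. Not the APQ-ViT implementation.
import torch

def uniform_quant(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Uniform quantization of values in [0, 1] (the Softmax output range)."""
    levels = 2 ** n_bits - 1
    return torch.round(x * levels) / levels

def log2_quant(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Power-of-two quantization: keeps resolution for the many small
    attention probabilities as well as the few dominant ones."""
    eps = 1e-12
    exp = torch.clamp(torch.round(-torch.log2(x + eps)), 0, 2 ** n_bits - 1)
    return 2.0 ** (-exp)

def blockwise_error(block: torch.nn.Module, quant_block: torch.nn.Module,
                    calib_x: torch.Tensor) -> torch.Tensor:
    """Calibration error measured on the whole block's output (blockwise),
    instead of comparing each layer's tensors in isolation."""
    with torch.no_grad():
        return torch.mean((block(calib_x) - quant_block(calib_x)) ** 2)

# Toy check: a peaked (power-law-like) Softmax row.
attn = torch.softmax(torch.tensor([6.0, 2.0, 0.5, 0.1, -1.0]), dim=-1)
print("uniform 4-bit:", uniform_quant(attn))  # small probabilities collapse to 0
print("log2    4-bit:", log2_quant(attn))     # small probabilities stay distinguishable
```

On this toy row, uniform 4-bit quantization maps every probability except the dominant one to zero, whereas the log2 quantizer keeps the small entries distinct, which is the kind of behavior a power-law-preserving Softmax quantizer aims for.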

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2303.14341
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3503161.3547826