Faster Inference of Integer SWIN Transformer by Removing the GELU Activation
- Publication Year :
- 2024
Abstract
- The SWIN transformer is a prominent vision transformer model that achieves state-of-the-art accuracy in image classification tasks. Despite this success, its unique architecture causes slower inference than comparable deep neural networks. Integer quantization of the model is one way to improve its inference latency; however, state-of-the-art methods have not been able to fully quantize the model. In this work, we improve upon the inference latency of state-of-the-art methods by removing the floating-point operations associated with the GELU activation in the SWIN transformer. While previous work proposed replacing the non-integer operations with linear approximation functions, we propose replacing GELU with the ReLU activation. The advantage of ReLU over previous methods is its low memory and computational complexity. We use iterative knowledge distillation to compensate for the accuracy lost by replacing GELU with ReLU. We quantize our GELU-less SWIN transformer and show that, on an NVIDIA RTX 4090 GPU, we can improve the inference latency of the quantized SWIN transformer by at least 11% while keeping the accuracy drop under 0.5% on the ImageNet evaluation dataset.
- Comment: 5 pages, 1 figure. Submitted to Edge Intelligence Workshop III, an AAAI 2024 workshop.
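- Illustration: the abstract describes two steps, swapping GELU for ReLU and then distilling the original model into the modified one. The sketch below is not the authors' code; it is a minimal PyTorch illustration of those two ideas under assumed names (replace_gelu_with_relu, distillation_loss), and the paper's iterative distillation schedule and subsequent integer quantization are not shown.

```python
# Minimal sketch (assumed helper names, not the paper's implementation):
# 1) recursively swap every nn.GELU for nn.ReLU, 2) distill the original
# (teacher) network into the GELU-less (student) network.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def replace_gelu_with_relu(module: nn.Module) -> nn.Module:
    """Recursively replace nn.GELU submodules with nn.ReLU (in place)."""
    for name, child in module.named_children():
        if isinstance(child, nn.GELU):
            setattr(module, name, nn.ReLU(inplace=True))
        else:
            replace_gelu_with_relu(child)
    return module

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target KL distillation with the standard cross-entropy loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example usage, assuming `swin` is a pretrained SWIN transformer (e.g. from timm):
# teacher = swin.eval()
# student = replace_gelu_with_relu(copy.deepcopy(swin))
# loss = distillation_loss(student(images), teacher(images).detach(), labels)
# After distillation, the GELU-less student can be integer-quantized.
```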
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2402.01169
- Document Type :
- Working Paper