
Block-Skim: Efficient Question Answering for Transformer

Authors :
Guan, Yue
Li, Zhengyi
Leng, Jingwen
Lin, Zhouhan
Guo, Minyi
Zhu, Yuhao
Publication Year :
2021

Abstract

Transformer models have achieved promising results on natural language processing (NLP) tasks, including extractive question answering (QA). Common Transformer encoders used in NLP tasks process the hidden states of all input tokens in the context paragraph throughout all layers. However, unlike tasks such as sequence classification, answering the raised question does not necessarily require all the tokens in the context paragraph. Following this motivation, we propose Block-Skim, which learns to skim unnecessary context in higher hidden layers to improve and accelerate Transformer performance. The key idea of Block-Skim is to identify the context blocks that must be further processed and those that can be safely discarded early during inference. Critically, we find that such information can be sufficiently derived from the self-attention weights inside the Transformer model. We further prune the hidden states corresponding to the unnecessary positions in lower layers, achieving significant inference-time speedup. To our surprise, we observe that models pruned in this way outperform their full-size counterparts. Block-Skim improves QA models' accuracy on different datasets and achieves a 3x speedup on the BERT-base model.

Comment: Published as a conference paper at AAAI 2022
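The abstract describes the mechanism only at a high level: score blocks of context tokens using the model's self-attention weights and drop low-scoring blocks early so later layers process fewer hidden states. Below is a minimal, hypothetical PyTorch sketch of that idea; the function name `block_skim`, the mean-pooled attention scoring, and the `block_size`/`keep_ratio` parameters are illustrative assumptions, not the paper's actual implementation (the paper learns when to skim rather than using a fixed heuristic).

```python
import torch

def block_skim(hidden_states, attention_probs, block_size=32, keep_ratio=0.5):
    """Sketch of block-level skimming driven by self-attention weights.

    hidden_states:   (batch, seq_len, hidden)           token states at some layer
    attention_probs: (batch, heads, seq_len, seq_len)   self-attention weights
    """
    batch, seq_len, hidden = hidden_states.shape
    num_blocks = seq_len // block_size

    # Score each token by the attention it receives, averaged over heads and
    # query positions, then pool token scores into per-block scores.
    token_scores = attention_probs.mean(dim=1).mean(dim=1)              # (batch, seq_len)
    block_scores = token_scores[:, :num_blocks * block_size] \
        .view(batch, num_blocks, block_size).mean(dim=-1)               # (batch, num_blocks)

    # Keep the top-scoring blocks; the remaining blocks are "skimmed" early.
    num_keep = max(1, int(num_blocks * keep_ratio))
    keep_blocks = block_scores.topk(num_keep, dim=-1).indices.sort(dim=-1).values

    # Gather the hidden states of the kept blocks for the remaining layers.
    offsets = torch.arange(block_size, device=hidden_states.device)
    token_idx = (keep_blocks.unsqueeze(-1) * block_size + offsets).view(batch, -1)
    kept_hidden = hidden_states.gather(
        1, token_idx.unsqueeze(-1).expand(-1, -1, hidden))
    return kept_hidden, keep_blocks
```

In this sketch the speedup comes from the reduced sequence length seen by all subsequent layers, which is the source of the inference-time gains the abstract reports.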

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2112.08560
Document Type :
Working Paper