
FastSeq: Make Sequence Generation Faster

Authors:
Yan, Yu
Hu, Fei
Chen, Jiusheng
Bhendawade, Nikhil
Ye, Ting
Gong, Yeyun
Duan, Nan
Cui, Desheng
Chi, Bingyu
Zhang, Ruofei
Publication Year:
2021

Abstract

Transformer-based models have made a tremendous impact on natural language generation. However, inference speed remains a bottleneck due to the large model sizes and the intensive computation involved in the auto-regressive decoding process. We develop the FastSeq framework to accelerate sequence generation without accuracy loss. The proposed optimization techniques include an attention cache optimization, an efficient algorithm for detecting repeated n-grams, and an asynchronous generation pipeline with parallel I/O. These optimizations are general enough to be applicable to Transformer-based models (e.g., T5, GPT2, and UniLM). Our benchmark results on a set of widely used and diverse models demonstrate a 4-9x inference speed gain. Additionally, FastSeq is easy to use with a simple one-line code change. The source code is available at https://github.com/microsoft/fastseq.

Comment: ACL 2021 Demo Track
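One of the optimizations named above is an efficient algorithm for detecting repeated n-grams during decoding. As a rough illustration only, and not FastSeq's actual implementation, the sketch below shows the general idea of blocking repeated n-grams incrementally: a map from each (n-1)-token prefix to the next tokens already seen avoids rescanning the whole generated sequence at every step. The class name NGramBlocker and its methods are hypothetical.

```python
# A minimal, generic sketch of n-gram repeat blocking during decoding.
# This is not FastSeq's implementation; it only illustrates detecting
# repeated n-grams incrementally rather than rescanning the sequence.
from collections import defaultdict
from typing import Dict, List, Set, Tuple


class NGramBlocker:
    """Tracks generated n-grams and reports tokens that would repeat one."""

    def __init__(self, n: int) -> None:
        self.n = n
        # Maps an (n-1)-token prefix to the set of next tokens already seen.
        self.seen: Dict[Tuple[int, ...], Set[int]] = defaultdict(set)

    def update(self, tokens: List[int]) -> None:
        """Record the newest n-gram ending at the last generated token."""
        if len(tokens) >= self.n:
            prefix = tuple(tokens[-self.n:-1])
            self.seen[prefix].add(tokens[-1])

    def banned_tokens(self, tokens: List[int]) -> Set[int]:
        """Tokens that, if generated next, would repeat an existing n-gram."""
        if len(tokens) < self.n - 1:
            return set()
        prefix = tuple(tokens[-(self.n - 1):]) if self.n > 1 else ()
        return self.seen.get(prefix, set())


if __name__ == "__main__":
    blocker = NGramBlocker(n=3)
    generated = []
    for tok in [5, 7, 9, 5, 7]:
        generated.append(tok)
        blocker.update(generated)
    # The prefix (5, 7) has already been followed by 9, so 9 is banned next.
    print(blocker.banned_tokens(generated))  # {9}
```

In a beam-search setting, one such tracker would be kept per hypothesis, and the banned set would be applied to the logits before selecting the next token; the per-step cost is then independent of the sequence length.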

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2106.04718
Document Type: Working Paper