
Efficient Encoder-Decoder Transformer Decoding for Decomposable Tasks

Authors:
Lu, Bo-Ru
Haduong, Nikita
Lin, Chien-Yu
Cheng, Hao
Smith, Noah A.
Ostendorf, Mari
Publication Year: 2024

Abstract

Transformer-based NLP models are powerful but have high computational costs that limit deployment. Finetuned encoder-decoder models are popular in specialized domains and can outperform larger, more generalized decoder-only models such as GPT-4. We introduce a new configuration for encoder-decoder models that improves efficiency on structured output and decomposable tasks where multiple outputs are required for a single shared input. Our method, prompt-in-decoder (PiD), encodes the input once and decodes the outputs in parallel, boosting both training and inference efficiency by avoiding duplicate input encoding and increasing the operational intensity (the ratio of arithmetic operations to memory accesses) of the decoding process by sharing the input key-value cache. We achieve a computation reduction that roughly scales with the number of subtasks, gaining up to 4.6x speed-up over state-of-the-art models for dialogue state tracking, summarization, and question-answering tasks, with comparable or better performance.

Comment: 14 pages, 4 figures. https://github.com/boru-roylu/encode-once-and-decode-in-parallel
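The sketch below illustrates the encode-once, decode-in-parallel idea described in the abstract: the shared input is run through the encoder a single time, and the resulting hidden states are reused across a batch of subtask prompts placed in the decoder. This is a minimal illustration only, assuming a recent Hugging Face transformers version, a T5 checkpoint, and hypothetical "summarize section N" subtask prompts; it is not the authors' implementation (see the linked GitHub repository for that).

```python
# Minimal sketch of PiD-style decoding: encode the shared input once, then
# decode all subtask prompts as one batch that reuses the single encoder output.
# Assumptions (not from the paper): t5-small checkpoint, Hugging Face transformers,
# and toy subtask prompts that tokenize to the same length (real code would pad/mask).
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

model_name = "t5-small"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).eval()

shared_input = "long document or dialogue text shared by all subtasks ..."
subtask_prompts = [  # hypothetical decoder-side prompts, one per subtask
    "summarize section 1",
    "summarize section 2",
    "summarize section 3",
]

# 1) Encode the shared input exactly once.
enc_ids = tokenizer(shared_input, return_tensors="pt").input_ids
with torch.no_grad():
    enc_out = model.encoder(input_ids=enc_ids)  # shape: (1, src_len, d_model)

# 2) Broadcast the single encoder output (and its attention mask) across the
#    subtask batch, so cross-attention reuses one shared key-value source
#    instead of re-encoding the input per subtask.
n = len(subtask_prompts)
enc_hidden = enc_out.last_hidden_state.expand(n, -1, -1)
enc_mask = torch.ones(n, enc_ids.shape[1], dtype=torch.long)

# 3) Put each subtask prompt in the decoder and decode all subtasks in parallel.
dec_prompt_ids = tokenizer(subtask_prompts, return_tensors="pt", padding=True).input_ids
outputs = model.generate(
    encoder_outputs=BaseModelOutput(last_hidden_state=enc_hidden),
    attention_mask=enc_mask,
    decoder_input_ids=dec_prompt_ids,
    max_new_tokens=32,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

Because the encoder runs once and its hidden states are shared across the decoding batch, the per-subtask memory traffic drops and more arithmetic is done per byte fetched, which is the operational-intensity gain the abstract refers to.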

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2403.13112
Document Type: Working Paper