1. One Wide Feedforward is All You Need
- Author
Pires, Telmo Pessoa, Lopes, António V., Assogba, Yannick, and Setiawan, Hendra
- Subjects
Computer Science - Computation and Language; Computer Science - Artificial Intelligence
- Abstract
The Transformer architecture has two main non-embedding components: Attention and the Feed Forward Network (FFN). Attention captures interdependencies between words regardless of their position, while the FFN non-linearly transforms each input token independently. In this work, we explore the role of the FFN and find that, despite taking up a significant fraction of the model's parameters, it is highly redundant. Concretely, we are able to substantially reduce the number of parameters with only a modest drop in accuracy by removing the FFN on the decoder layers and sharing a single FFN across the encoder. Finally, we scale this architecture back to its original size by increasing the hidden dimension of the shared FFN, achieving substantial gains in both accuracy and latency with respect to the original Transformer Big.
- Comment
Accepted at WMT23 (EMNLP 2023)
- Published
2023
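
The abstract describes three architectural changes: dropping the FFN from every decoder layer, sharing one FFN instance across all encoder layers, and widening that shared FFN to restore the original parameter budget. Below is a minimal PyTorch sketch of that layout, not the authors' implementation: the module names (`SharedFFN`, `EncoderLayer`, `DecoderLayer`), the widening factor, and the omission of masking and dropout details are illustrative assumptions on top of a standard Transformer Big configuration.

```python
import torch
import torch.nn as nn

class SharedFFN(nn.Module):
    """One position-wise FFN reused by every encoder layer.

    Widening d_ff compensates for the parameters removed by sharing
    and by deleting the decoder FFNs (factor chosen here is illustrative).
    """
    def __init__(self, d_model: int, d_ff: int, dropout: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class EncoderLayer(nn.Module):
    """Standard encoder layer, except the FFN is passed in and shared."""
    def __init__(self, d_model: int, n_heads: int, shared_ffn: SharedFFN):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = shared_ffn  # the same module instance in every layer
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, _ = self.attn(x, x, x)
        x = self.norm1(x + a)
        return self.norm2(x + self.ffn(x))

class DecoderLayer(nn.Module):
    """Decoder layer with self- and cross-attention but no FFN,
    mirroring the paper's removal of decoder FFNs.
    (Causal masking omitted here for brevity.)"""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        a, _ = self.self_attn(x, x, x)
        x = self.norm1(x + a)
        a, _ = self.cross_attn(x, memory, memory)
        return self.norm2(x + a)

# 6 encoder layers sharing one wide FFN, 6 FFN-free decoder layers
# (Transformer Big dimensions; the 6x widening is an assumption).
d_model, n_heads = 1024, 16
wide_ffn = SharedFFN(d_model, d_ff=4096 * 6)
encoder = nn.ModuleList([EncoderLayer(d_model, n_heads, wide_ffn) for _ in range(6)])
decoder = nn.ModuleList([DecoderLayer(d_model, n_heads) for _ in range(6)])
```

Because every `EncoderLayer` holds a reference to the same `SharedFFN` object, its weights are counted once and receive gradients from all layers, which is what makes the widened hidden dimension affordable relative to twelve independent FFNs.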