
Router-Tuning: A Simple and Effective Approach for Enabling Dynamic-Depth in Transformers

Authors:
He, Shwai
Ge, Tao
Sun, Guoheng
Tian, Bowei
Wang, Xiaoyang
Li, Ang
Yu, Dong
Publication Year: 2024

Abstract

Traditional transformer models allocate a fixed amount of computation to every input token, leading to inefficient and unnecessary computation. To address this, Mixture of Depths (MoD) was introduced to dynamically adjust computational depth by skipping less important layers. Despite its promise, current MoD approaches remain under-explored and face two main challenges: (1) high training costs, since the entire model must be trained along with the routers that decide which layers to skip, and (2) the risk of performance degradation when important layers are bypassed. To address the first issue, we propose Router-Tuning, a method that fine-tunes only the router on a small dataset, drastically reducing the computational overhead of full-model training. For the second challenge, we propose MindSkip, which deploys Attention with Dynamic Depths. This method preserves the model's performance while significantly improving computational and memory efficiency. Extensive experiments demonstrate that our approach delivers competitive results while dramatically improving computational efficiency, e.g., a 21% speedup with only a 0.2% performance drop. The code is released at https://github.com/CASE-Lab-UMD/Router-Tuning.
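To make the mechanism concrete, here is a minimal sketch of the router-gated skipping idea the abstract describes: a small trainable router scores each input, and attention runs only when the score clears a threshold; otherwise the layer is an identity shortcut. This is a numpy stand-in under assumed simplifications (a toy attention block, one sequence-level gate rather than per-token routing, an arbitrary `threshold` parameter), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_attention(x):
    # Placeholder for a frozen self-attention block:
    # a softmax-weighted mix of the tokens in x.
    scores = x @ x.T / np.sqrt(x.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

class RoutedAttention:
    """Runs attention only when the router deems the input important.

    In the Router-Tuning setting, only the router weights (`self.w`)
    would be fine-tuned; the attention parameters stay frozen.
    """
    def __init__(self, dim, threshold=0.5):  # threshold is illustrative
        self.w = rng.normal(scale=0.02, size=dim)  # router parameters
        self.threshold = threshold

    def forward(self, x):
        # One gate per sequence: mean-pooled score squashed to (0, 1).
        gate = 1.0 / (1.0 + np.exp(-(x.mean(axis=0) @ self.w)))
        if gate < self.threshold:
            return x                  # skip the layer: identity shortcut
        return x + toy_attention(x)   # residual attention as usual

layer = RoutedAttention(dim=8)
x = rng.normal(size=(4, 8))  # 4 tokens, hidden size 8
y = layer.forward(x)         # same shape as x either way
```

The skipped path costs no attention FLOPs and needs no KV cache for that layer, which is where the reported speedup and memory savings would come from.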

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2410.13184
Document Type: Working Paper