
On the Optimization and Generalization of Multi-head Attention

Authors :
Deora, Puneesh
Ghaderi, Rouzbeh
Taheri, Hossein
Thrampoulidis, Christos
Publication Year :
2023

Abstract

The training and generalization dynamics of the Transformer's core mechanism, namely the attention mechanism, remain under-explored. Moreover, existing analyses focus primarily on single-head attention. Inspired by the demonstrated benefits of overparameterization when training fully-connected networks, we investigate the potential optimization and generalization advantages of using multiple attention heads. Towards this goal, we derive convergence and generalization guarantees for gradient-descent training of a single-layer multi-head self-attention model, under a suitable realizability condition on the data. We then establish primitive conditions on the initialization that ensure realizability holds. Finally, we demonstrate that these conditions are satisfied for a simple tokenized-mixture model. We expect the analysis can be extended to various data-model and architecture variations.

Comment :
48 pages; presented at the Workshop on High-dimensional Learning Dynamics, ICML 2023
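
To make the setting in the abstract concrete, the following is a minimal, purely illustrative sketch (not the authors' code) of a single-layer multi-head self-attention model trained with full-batch gradient descent on synthetic labeled token sequences. All dimensions, the pooling/readout choice, the loss, and the synthetic data are assumptions made for demonstration only and need not match the paper's exact parameterization.

```python
# Illustrative sketch: single-layer multi-head self-attention classifier
# trained with plain (full-batch) gradient descent. Hyperparameters and
# data are arbitrary assumptions, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

d, T, H, n = 16, 8, 4, 256  # embedding dim, tokens per sequence, attention heads, samples

class SingleLayerMHA(nn.Module):
    def __init__(self, d, num_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=d, num_heads=num_heads, batch_first=True)
        self.readout = nn.Linear(d, 1)  # linear readout on pooled attention output

    def forward(self, x):               # x: (batch, T, d)
        out, _ = self.attn(x, x, x)     # self-attention: queries = keys = values = x
        return self.readout(out.mean(dim=1)).squeeze(-1)  # mean-pool tokens, score

# Synthetic binary-labeled token sequences (illustrative stand-in for the data model).
X = torch.randn(n, T, d)
y = torch.randint(0, 2, (n,)).float() * 2 - 1  # labels in {-1, +1}

model = SingleLayerMHA(d, H)
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # full batch => gradient descent
loss_fn = nn.SoftMarginLoss()                      # logistic loss on +/-1 labels

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```

Increasing `H` while keeping `d` fixed gives one simple way to vary the number of heads and observe the optimization behavior the abstract studies theoretically.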

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2310.12680
Document Type :
Working Paper