
Self-Attentional Acoustic Models

Authors:
Sperber, Matthias
Niehues, Jan
Neubig, Graham
Stüker, Sebastian
Waibel, Alex
Publication Year:
2018

Abstract

Self-attention is a method of encoding sequences of vectors by relating these vectors to each other based on pairwise similarities. These models have recently shown promising results for modeling discrete sequences, but they are non-trivial to apply to acoustic modeling due to computational and modeling issues. In this paper, we apply self-attention to acoustic modeling, proposing several improvements to mitigate these issues: First, self-attention memory grows quadratically in the sequence length, which we address through a downsampling technique. Second, we find that previous approaches to incorporating position information into the model are unsuitable, and explore other representations and hybrid models to this end. Third, to stress the importance of local context in the acoustic signal, we propose a Gaussian biasing approach that allows explicit control over the context range. In experiments, our model approaches a strong baseline based on LSTMs with network-in-network connections while being much faster to compute. Besides speed, we find that interpretability is a strength of self-attentional acoustic models, and demonstrate that self-attention heads learn a linguistically plausible division of labor.

Comment: Published at Interspeech 2018
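
To make the Gaussian biasing idea concrete, here is a minimal NumPy sketch of a single self-attention head whose logits are biased toward local context by an additive Gaussian penalty on squared frame distance. This is an illustrative reading of the abstract, not the authors' implementation: the function name, weight shapes, and the fixed (rather than learned) sigma are all assumptions.

```python
# Minimal sketch (assumption-laden, not the paper's code) of a
# Gaussian-biased self-attention head over an acoustic feature sequence.
import numpy as np

def gaussian_biased_attention(X, W_q, W_k, W_v, sigma):
    """One attention head with scores biased toward local context.

    X:     (T, d) sequence of acoustic feature frames
    sigma: standard deviation in frames; smaller sigma -> more local
           attention (explicit control over the context range)
    """
    T, d = X.shape
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d)               # (T, T) pairwise similarities

    # Additive Gaussian bias: penalize each query-key pair by its
    # squared distance in time before the softmax.
    pos = np.arange(T)
    dist2 = (pos[:, None] - pos[None, :]) ** 2  # squared frame distances
    scores = scores - dist2 / (2.0 * sigma ** 2)

    # Softmax over keys (numerically stabilized).
    scores -= scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

# Usage example: 100 frames of 40-dim features, one head of width 40.
rng = np.random.default_rng(0)
T, d = 100, 40
X = rng.standard_normal((T, d))
W_q, W_k, W_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = gaussian_biased_attention(X, W_q, W_k, W_v, sigma=5.0)
print(out.shape)  # (100, 40)
```

Note that the (T, T) score matrix in this sketch is exactly the quadratic memory cost the abstract mentions; the paper's downsampling technique addresses that cost, which the sketch does not attempt to reproduce.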

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1803.09519
Document Type:
Working Paper