
Exploiting Attention-based Sequence-to-Sequence Architectures for Sound Event Localization

Authors :
Schymura, Christopher
Ochiai, Tsubasa
Delcroix, Marc
Kinoshita, Keisuke
Nakatani, Tomohiro
Araki, Shoko
Kolossa, Dorothea
Publication Year :
2021

Abstract

Sound event localization frameworks based on deep neural networks have shown increased robustness with respect to reverberation and noise in comparison to classical parametric approaches. In particular, recurrent architectures that incorporate temporal context into the estimation process seem to be well-suited for this task. This paper proposes a novel approach to sound event localization by utilizing an attention-based sequence-to-sequence model. These types of models have been successfully applied to problems in natural language processing and automatic speech recognition. In this work, a multi-channel audio signal is encoded to a latent representation, which is subsequently decoded to a sequence of estimated directions-of-arrival. Here, attention captures temporal dependencies in the audio signal by focusing on the frames that are relevant for estimating the activity and direction-of-arrival of sound events at the current time-step. The framework is evaluated on three publicly available datasets for sound event localization. It yields superior localization performance compared to state-of-the-art methods in both anechoic and reverberant conditions.

Comment: Published in Proceedings of the 28th European Signal Processing Conference (EUSIPCO), 2020
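The core mechanism described in the abstract can be sketched as follows. This is a minimal, illustrative example of dot-product attention over encoder frames, not the paper's actual model: all dimensions, the random linear "encoder", the query, and the two-component direction-of-arrival output head are placeholder assumptions standing in for trained network components.

```python
import numpy as np

rng = np.random.default_rng(0)

T, C, D = 10, 4, 8  # time frames, audio channels, latent dimension (arbitrary toy sizes)

# "Encoder": project each multi-channel frame to a latent representation.
# A fixed random linear map stands in here for the trained recurrent encoder.
frames = rng.standard_normal((T, C))
W_enc = rng.standard_normal((C, D))
H = np.tanh(frames @ W_enc)  # encoder states, shape (T, D)

def attend(query, keys):
    """Dot-product attention: weight encoder frames by relevance to `query`."""
    scores = keys @ query                 # one score per time frame, shape (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over time frames
    context = weights @ keys              # weighted sum of encoder states, shape (D,)
    return weights, context

# Decoder step: a query vector (e.g. the previous decoder state) selects the
# frames relevant for the current time-step's estimate.
query = rng.standard_normal(D)
weights, context = attend(query, H)

# Placeholder output head mapping the context vector to a direction-of-arrival
# estimate (here arbitrarily two components, e.g. azimuth and elevation).
W_doa = rng.standard_normal((D, 2))
doa = context @ W_doa

print(weights.shape, context.shape, doa.shape)  # → (10,) (8,) (2,)
```

The attention weights form a distribution over the input frames, which is what allows the decoder to focus on specific frames when estimating each output, as the abstract describes.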

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2103.00417
Document Type :
Working Paper