
DNN driven Speaker Independent Audio-Visual Mask Estimation for Speech Separation

Authors:
Gogate, Mandar
Adeel, Ahsan
Marxer, Ricard
Barker, Jon
Hussain, Amir
Publication Year:
2018

Abstract

The human auditory cortex excels at selectively suppressing background noise to focus on a target speaker. The process of selective attention in the brain is known to contextually exploit the available audio and visual cues to better focus on the target speaker while filtering out other noise sources. In this study, we propose a novel deep neural network (DNN) based audio-visual (AV) mask estimation model. The proposed AV mask estimation model contextually integrates the temporal dynamics of both audio and noise-immune visual features for improved mask estimation and speech separation. For optimal AV feature extraction and ideal binary mask (IBM) estimation, a hybrid DNN architecture is exploited to leverage the complementary strengths of a stacked long short-term memory (LSTM) and a convolutional LSTM network. Comparative simulation results in terms of speech quality and intelligibility demonstrate significant performance improvement of the proposed AV mask estimation model over audio-only and visual-only mask estimation approaches, for both speaker-dependent and speaker-independent scenarios.

Comment: Accepted for Interspeech 2018, 5 pages, 4 figures
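The abstract describes a hybrid network that fuses stacked LSTMs over audio features with a convolutional LSTM over lip-region video to estimate a per-frame binary mask. The sketch below illustrates one plausible way to wire such a hybrid in Keras; it is not the authors' implementation, and the framework, input dimensions (T, F, H, W), layer sizes, and binary cross-entropy loss are all illustrative assumptions.

```python
# Hypothetical sketch of an audio-visual mask estimation network:
# stacked LSTMs on audio frames + ConvLSTM on lip-region video,
# fused per frame and mapped to a sigmoid (IBM-like) mask.
# All dimensions and hyperparameters below are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

T, F = 100, 257        # assumed: frames per utterance, spectral bins per frame
H, W = 48, 48          # assumed: cropped lip-region size in pixels

# Audio branch: spectral frames -> stacked LSTMs for temporal context.
audio_in = layers.Input(shape=(T, F), name="audio_features")
a = layers.LSTM(256, return_sequences=True)(audio_in)
a = layers.LSTM(256, return_sequences=True)(a)

# Visual branch: grayscale lip frames -> ConvLSTM for spatio-temporal features.
visual_in = layers.Input(shape=(T, H, W, 1), name="lip_frames")
v = layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                      return_sequences=True)(visual_in)
v = layers.TimeDistributed(layers.Flatten())(v)
v = layers.TimeDistributed(layers.Dense(256, activation="relu"))(v)

# Fuse the two streams frame by frame and estimate the mask.
x = layers.Concatenate()([a, v])
x = layers.LSTM(256, return_sequences=True)(x)
mask = layers.TimeDistributed(
    layers.Dense(F, activation="sigmoid"), name="mask_estimate")(x)

model = Model([audio_in, visual_in], mask)
model.compile(optimizer="adam", loss="binary_crossentropy")
```

At inference, the estimated mask would be applied to the noisy spectrogram before resynthesis; the training target would be the ideal binary mask computed from clean and noise signals.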

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1808.00060
Document Type:
Working Paper
Full Text:
https://doi.org/10.21437/Interspeech.2018-2516