Speech Enhancement Based on Denoising Autoencoder With Multi-Branched Encoders
- Source :
- IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:2756-2769
- Publication Year :
- 2020
- Publisher :
- Institute of Electrical and Electronics Engineers (IEEE)
Abstract
- Deep learning-based models have greatly advanced the performance of speech enhancement (SE) systems. However, two problems remain unsolved, both closely related to model generalizability across noisy conditions: (1) mismatched noisy conditions during testing, i.e., performance is generally sub-optimal when models are tested with unseen noise types that are not included in the training data; (2) local focus on specific noisy conditions, i.e., models trained on multiple noise types cannot optimally remove a specific noise type even when that noise type is included in the training data. These problems are common in real applications. In this article, we propose a novel denoising autoencoder with a multi-branched encoder (termed DAEME) model to deal with these two problems. The DAEME model involves two stages: training and testing. In the training stage, we build multiple component models to form a multi-branched encoder based on a decision tree (DSDT). The DSDT is built on prior knowledge of speech and noisy conditions (speaker, environment, and signal factors are considered in this paper), where each component of the multi-branched encoder performs a particular mapping from noisy to clean speech along a branch of the DSDT. Finally, a decoder is trained on top of the multi-branched encoder. In the testing stage, noisy speech is first processed by each component model; the multiple outputs from these models are then integrated by the decoder to determine the final enhanced speech. Experimental results show that DAEME is superior to several baseline models in terms of objective evaluation metrics, automatic speech recognition results, and quality in subjective human listening tests.
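- The two-stage pipeline described in the abstract can be sketched in plain Python. This is a minimal toy illustration, not the paper's implementation: the branch networks are stood in for by random linear maps, the sizes (`FEAT_DIM`, `N_BRANCHES`) and all function names are hypothetical, and the decision-tree construction and training procedure are omitted entirely. It only shows the testing-stage data flow: every branch of the multi-branched encoder processes the noisy input, and a decoder fuses the concatenated branch outputs into one enhanced frame.

```python
import random

random.seed(0)

# Hypothetical toy sizes (assumptions, not from the paper):
FEAT_DIM = 8     # spectral feature dimension; real systems use hundreds of STFT bins
N_BRANCHES = 3   # one branch per leaf path of the decision tree (DSDT)

def linear(vec, weights):
    """Apply a weights[out][in] matrix to a vector (plain-Python matrix-vector product)."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def random_matrix(rows, cols, scale=0.1):
    """Random weights standing in for trained network parameters."""
    return [[random.uniform(-scale, scale) for _ in range(cols)] for _ in range(rows)]

# Each branch stands in for a trained noisy-to-clean mapping along one DSDT branch.
branches = [random_matrix(FEAT_DIM, FEAT_DIM) for _ in range(N_BRANCHES)]

# The decoder fuses the concatenated branch outputs into the final estimate.
decoder = random_matrix(FEAT_DIM, N_BRANCHES * FEAT_DIM)

def daeme_enhance(noisy_frame):
    """Testing stage: run every branch, concatenate the outputs, then decode."""
    stacked = []
    for w in branches:                 # multi-branched encoder: all branches run
        stacked.extend(linear(noisy_frame, w))
    return linear(stacked, decoder)    # decoder produces the enhanced frame

noisy = [random.uniform(-1, 1) for _ in range(FEAT_DIM)]
enhanced = daeme_enhance(noisy)
print(len(enhanced))  # one enhanced frame of FEAT_DIM values
```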
- Subjects :
- FOS: Computer and information sciences
Sound (cs.SD)
Acoustics and Ultrasonics
Computer science
Speech recognition
Noise reduction
Computer Science - Sound
Data modeling
Audio and Speech Processing (eess.AS)
FOS: Electrical engineering, electronic engineering, information engineering
Computer Science (miscellaneous)
Electrical and Electronic Engineering
Noise measurement
Deep learning
Speech enhancement
Computational Mathematics
Noise
Artificial intelligence
Encoder
Decoding methods
Electrical Engineering and Systems Science - Audio and Speech Processing
Details
- ISSN :
- 2329-9304 and 2329-9290
- Volume :
- 28
- Database :
- OpenAIRE
- Journal :
- IEEE/ACM Transactions on Audio, Speech, and Language Processing
- Accession number :
- edsair.doi.dedup.....7bec75d2995405f94a8d036f16eca68c