
UniKW-AT: Unified Keyword Spotting and Audio Tagging

Authors :
Dinkel, Heinrich
Wang, Yongqing
Yan, Zhiyong
Zhang, Junbo
Wang, Yujun
Publication Year :
2022

Abstract

Within the audio research community and the industry, keyword spotting (KWS) and audio tagging (AT) are seen as two distinct tasks and research fields. However, from a technical point of view, both tasks are identical: they predict a label (a keyword in KWS, a sound event in AT) for some fixed-sized input audio segment. This work proposes UniKW-AT, an initial approach for jointly training both KWS and AT. UniKW-AT enhances the noise-robustness of KWS, while also being able to predict specific sound events and enabling conditional wake-ups on sound events. Our approach extends the AT pipeline with additional labels describing the presence of a keyword. Experiments are conducted on the Google Speech Commands V1 (GSCV1) and the balanced Audioset (AS) datasets. The proposed MobileNetV2 model achieves an accuracy of 97.53% on the GSCV1 dataset and an mAP of 33.4 on the AS evaluation set. Further, we show that significant noise-robustness gains can be observed on a real-world KWS dataset, greatly outperforming standard KWS approaches. Our study shows that KWS and AT can be merged into a single framework without significant performance degradation.

Comment: Accepted at Interspeech 2022
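The abstract only sketches the method at a high level. As a minimal, hypothetical illustration of the label-extension idea, the snippet below assumes a PyTorch/torchvision setup, log-mel spectrogram inputs, and 12 GSCV1 keyword classes appended to the 527 AudioSet event classes; none of these implementation details (class names, dimensions, the 1x1 stem) come from the paper itself.

# Hypothetical sketch of the UniKW-AT label-extension idea (not the authors' code):
# keyword classes are appended to the AudioSet event classes and a single
# MobileNetV2 classifier is trained over the combined multi-label space.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

NUM_AUDIOSET_EVENTS = 527   # AudioSet ontology size
NUM_KEYWORDS = 12           # assumed GSCV1 keyword classes (incl. filler labels)
NUM_CLASSES = NUM_AUDIOSET_EVENTS + NUM_KEYWORDS

class UniKWAT(nn.Module):
    """MobileNetV2 over log-mel spectrograms with a joint KWS + AT output head."""

    def __init__(self) -> None:
        super().__init__()
        # torchvision's MobileNetV2 expects 3-channel inputs; a 1x1 convolution
        # maps the single spectrogram channel to 3 channels (one simple workaround).
        self.stem = nn.Conv2d(1, 3, kernel_size=1)
        self.backbone = mobilenet_v2()
        # Replace the ImageNet head with a head over the combined label space.
        self.backbone.classifier[-1] = nn.Linear(
            self.backbone.last_channel, NUM_CLASSES)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, frames) log-mel spectrogram
        return self.backbone(self.stem(spec))

model = UniKWAT()
criterion = nn.BCEWithLogitsLoss()        # multi-label objective over AT + KWS labels
spec = torch.randn(4, 1, 64, 101)         # dummy batch of spectrograms
targets = torch.zeros(4, NUM_CLASSES)     # sound-event and keyword targets per clip
loss = criterion(model(spec), targets)

In this reading of the abstract, a clip containing a keyword simply receives both its sound-event labels and the corresponding keyword label, so one model covers both tasks rather than running a separate KWS network.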

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2209.11377
Document Type :
Working Paper
Full Text :
https://doi.org/10.21437/Interspeech.2022-607