
SpeechNet: A Universal Modularized Model for Speech Processing Tasks

Authors:
Chen, Yi-Chen
Chi, Po-Han
Yang, Shu-wen
Chang, Kai-Wei
Lin, Jheng-hao
Huang, Sung-Feng
Liu, Da-Rong
Liu, Chi-Liang
Lee, Cheng-Kuang
Lee, Hung-yi
Publication Year:
2021

Abstract

There is a wide variety of speech processing tasks, ranging from extracting content information from speech signals to generating speech signals. For different tasks, model networks are usually designed and tuned separately. If a universal model could perform multiple speech processing tasks, some tasks might be improved by the related abilities learned from other tasks. Multi-task learning of a wide variety of speech processing tasks with a universal model has not yet been studied. This paper proposes a universal modularized model, SpeechNet, which casts all speech processing tasks into a speech/text input and speech/text output format. We select five essential speech processing tasks for multi-task learning experiments with SpeechNet. We show that SpeechNet learns all of the above tasks, and we further analyze which tasks can be improved by other tasks. SpeechNet is modularized and flexible, allowing more modules, tasks, or training approaches to be incorporated in the future. We release the code and experimental settings to facilitate research on modularized universal models and multi-task learning of speech processing tasks.
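The core idea described above, casting every task into a speech/text input and speech/text output format so that all tasks share a small set of modality-specific modules, can be sketched as below. This is a minimal illustration, not the authors' implementation; all class and module names are assumptions for the sake of the example.

```python
class SpeechNetSketch:
    """Illustrative sketch of a modularized universal model: each task is
    just a (input_modality, output_modality) pair routed through shared
    per-modality encoder/decoder modules."""

    def __init__(self):
        # One encoder and one decoder per modality, shared across all tasks.
        # Real modules would be neural networks; strings stand in here.
        self.encoders = {
            "speech": lambda x: f"enc_speech({x})",
            "text":   lambda x: f"enc_text({x})",
        }
        self.decoders = {
            "speech": lambda h: f"dec_speech({h})",
            "text":   lambda h: f"dec_text({h})",
        }

    def run(self, task, x):
        """Run one task, given as (input_modality, output_modality),
        e.g. ASR -> ("speech", "text"), TTS -> ("text", "speech")."""
        in_mod, out_mod = task
        hidden = self.encoders[in_mod](x)      # modality-specific encoding
        return self.decoders[out_mod](hidden)  # modality-specific decoding


model = SpeechNetSketch()
print(model.run(("speech", "text"), "wav"))  # ASR-like routing
print(model.run(("text", "speech"), "hi"))   # TTS-like routing
```

Because every task reuses the same encoder/decoder modules, multi-task training updates shared parameters, which is how learning one task could transfer to another.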

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2105.03070
Document Type:
Working Paper