
Few-Shot Spoken Language Understanding via Joint Speech-Text Models

Authors :
Chien, Chung-Ming
Zhang, Mingjiamei
Chou, Ju-Chieh
Livescu, Karen
Publication Year :
2023

Abstract

Recent work on speech representation models jointly pre-trained with text has demonstrated the potential of improving speech representations by encoding speech and text in a shared space. In this paper, we leverage such shared representations to address the persistent challenge of limited data availability in spoken language understanding tasks. By employing a pre-trained speech-text model, we find that models fine-tuned on text can be effectively transferred to speech test data. With as little as 1 hour of labeled speech data, our proposed approach achieves performance on spoken language understanding tasks (specifically, sentiment analysis and named entity recognition) comparable to previous methods that use speech-only pre-trained models fine-tuned on 10 times more data. Beyond this proof-of-concept study, we also analyze the latent representations. We find that the bottom layers of speech-text models are largely task-agnostic, aligning speech and text representations into a shared space, while the top layers are more task-specific.
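
The transfer recipe the abstract describes (train a task head on text through a jointly pre-trained speech-text encoder, then apply the same head to speech) can be sketched roughly as below. This is a minimal, runnable illustration, not the authors' implementation: the toy JointSpeechTextEncoder, the three-class sentiment label set, and the synthetic batches are all assumptions standing in for a real jointly pre-trained model and real data.

```python
# Hypothetical sketch of text-to-speech transfer through a shared
# representation space. The encoder is a randomly initialized stand-in;
# a real experiment would load a jointly pre-trained speech-text model.
import torch
import torch.nn as nn

class JointSpeechTextEncoder(nn.Module):
    """Toy encoder mapping text tokens or raw waveforms into one shared
    embedding space (in the paper, this alignment comes from joint
    speech-text pre-training)."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.dim = dim
        self.text_embed = nn.EmbeddingBag(vocab_size, dim)            # text branch
        self.speech_conv = nn.Conv1d(1, dim, kernel_size=400, stride=160)  # speech branch

    def encode_text(self, token_ids):        # (batch, tokens) -> (batch, dim)
        return self.text_embed(token_ids)

    def encode_speech(self, waveform):       # (batch, samples) -> (batch, dim)
        return self.speech_conv(waveform.unsqueeze(1)).mean(dim=-1)

encoder = JointSpeechTextEncoder().eval()    # pretend pre-trained, kept frozen
head = nn.Linear(encoder.dim, 3)             # e.g., 3 sentiment classes (assumed)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Step 1: fine-tune the task head on (abundant) labeled text.
for _ in range(100):
    tokens = torch.randint(0, 1000, (8, 20))     # synthetic text batch
    labels = torch.randint(0, 3, (8,))           # synthetic labels
    with torch.no_grad():                        # encoder stays frozen
        feats = encoder.encode_text(tokens)
    loss = loss_fn(head(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 2: apply the *same* text-trained head to speech inputs. In the paper,
# a small amount (~1 hour) of labeled speech is then used to refine the model
# before evaluation on spoken language understanding tasks.
wave = torch.randn(8, 16000)                     # synthetic 1 s of 16 kHz audio
with torch.no_grad():
    preds = head(encoder.encode_speech(wave)).argmax(dim=-1)
print(preds)
```

The sketch's load-bearing assumption is that encode_text and encode_speech land in the same space, so a head trained on one modality remains meaningful on the other; in the paper that alignment is provided by joint pre-training, which this toy encoder does not actually deliver.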

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2310.05919
Document Type :
Working Paper