
Frozen Large Language Models Can Perceive Paralinguistic Aspects of Speech

Authors:
Kang, Wonjune
Jia, Junteng
Wu, Chunyang
Zhou, Wei
Lakomkin, Egor
Gaur, Yashesh
Sari, Leda
Kim, Suyoun
Li, Ke
Mahadeokar, Jay
Kalinli, Ozlem
Publication Year: 2024

Abstract

As speech becomes an increasingly common modality for interacting with large language models (LLMs), it is becoming desirable to develop systems where LLMs can take into account users' emotions or speaking styles when providing their responses. In this work, we study the potential of an LLM to understand these aspects of speech without fine-tuning its weights. To do this, we utilize an end-to-end system with a speech encoder; the encoder is trained to produce token embeddings such that the LLM's response to an expressive speech prompt is aligned with its response to a semantically matching text prompt where the speaker's emotion has also been specified. We find that this training framework allows the encoder to generate tokens that capture both semantic and paralinguistic information in speech and effectively convey it to the LLM, even when the LLM remains completely frozen. We also explore training on additional emotion and style-related response alignment tasks, finding that they further increase the amount of paralinguistic information explicitly captured in the speech tokens. Experiments demonstrate that our system is able to produce higher quality and more empathetic responses to expressive speech prompts compared to several baselines.
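To make the response-alignment idea concrete, the sketch below shows one plausible form of the training loop: a trainable encoder maps audio features into the frozen LLM's embedding space, and the encoder is optimized so that the LLM, conditioned on those speech tokens, reproduces the response it gave to a matching text prompt that also states the speaker's emotion. This is a minimal illustration under assumptions, not the paper's implementation: the names SpeechEncoder, response_alignment_step, and embed_fn are hypothetical, the pooling-based encoder and num_tokens=32 are placeholders, and a HuggingFace-style causal LM interface accepting inputs_embeds is assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechEncoder(nn.Module):
    """Hypothetical encoder: audio features -> a short sequence of
    embeddings ("speech tokens") in the frozen LLM's input space."""
    def __init__(self, audio_dim: int, llm_dim: int, num_tokens: int = 32):
        super().__init__()
        self.num_tokens = num_tokens
        self.proj = nn.Sequential(
            nn.Linear(audio_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, audio_feats):  # (B, frames, audio_dim)
        # Pool the frame sequence down to a fixed number of tokens.
        x = F.adaptive_avg_pool1d(
            audio_feats.transpose(1, 2), self.num_tokens
        ).transpose(1, 2)            # (B, num_tokens, audio_dim)
        return self.proj(x)          # (B, num_tokens, llm_dim)


def response_alignment_step(llm, encoder, audio_feats, target_ids, embed_fn):
    """One training step. `target_ids` is the frozen LLM's response to a
    semantically matching text prompt in which the speaker's emotion was
    also specified; the encoder is trained so the LLM reproduces that
    response when conditioned on speech tokens instead of text."""
    speech_tokens = encoder(audio_feats)       # gradients flow here only
    target_embeds = embed_fn(target_ids)       # (B, L, llm_dim), frozen
    inputs = torch.cat([speech_tokens, target_embeds], dim=1)

    # Frozen LLM forward pass; its parameters were set to
    # requires_grad_(False) beforehand, so only the encoder updates.
    logits = llm(inputs_embeds=inputs).logits  # (B, n + L, vocab)

    # Next-token prediction on the response: position n-1 .. n+L-2
    # predicts target tokens 0 .. L-1; the speech-token prefix itself
    # carries no prediction targets.
    n = speech_tokens.size(1)
    shift_logits = logits[:, n - 1:-1, :]
    loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        target_ids.reshape(-1),
    )
    return loss
```

Because gradients reach only the encoder, this setup matches the abstract's central claim: the LLM remains completely frozen while the speech tokens learn to carry both semantic and paralinguistic information.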

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2410.01162
Document Type: Working Paper