
FunAudioLLM: Voice Understanding and Generation Foundation Models for Natural Interaction Between Humans and LLMs

Authors:
An, Keyu
Chen, Qian
Deng, Chong
Du, Zhihao
Gao, Changfeng
Gao, Zhifu
Gu, Yue
He, Ting
Hu, Hangrui
Hu, Kai
Ji, Shengpeng
Li, Yabin
Li, Zerui
Lu, Heng
Luo, Haoneng
Lv, Xiang
Ma, Bin
Ma, Ziyang
Ni, Chongjia
Song, Changhe
Shi, Jiaqi
Shi, Xian
Wang, Hao
Wang, Wen
Wang, Yuxuan
Xiao, Zhangyu
Yan, Zhijie
Yang, Yexin
Zhang, Bin
Zhang, Qinglin
Zhang, Shiliang
Zhao, Nan
Zheng, Siqi
Publication Year:
2024

Abstract

This report introduces FunAudioLLM, a family of models designed to enhance natural voice interaction between humans and large language models (LLMs). At its core are two innovative models: SenseVoice, which handles multilingual speech recognition, emotion recognition, and audio event detection; and CosyVoice, which facilitates natural speech generation with control over language, timbre, speaking style, and speaker identity. SenseVoice-Small delivers exceptionally low-latency ASR for five languages, while SenseVoice-Large supports high-precision ASR for over 50 languages; CosyVoice excels at multilingual voice generation, zero-shot in-context learning, cross-lingual voice cloning, and instruction following. The SenseVoice and CosyVoice models have been open-sourced on ModelScope and Hugging Face, with the corresponding training, inference, and fine-tuning code released on GitHub. By integrating these models with LLMs, FunAudioLLM enables applications such as speech-to-speech translation, emotional voice chat, interactive podcasts, and expressive audiobook narration, pushing the boundaries of voice interaction technology. Demos are available at https://fun-audio-llm.github.io, and the code can be accessed at https://github.com/FunAudioLLM.

Comment: Work in progress. Authors are listed in alphabetical order by family name.
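For readers who want to try the released models, the sketch below shows roughly how SenseVoice-Small might be invoked for transcription through the funasr toolkit the project builds on. The model identifier, keyword arguments, and postprocessing helper follow the project's public examples as of this writing and should be treated as assumptions rather than a definitive API.

```python
# Minimal sketch: transcribing an audio file with SenseVoice-Small via funasr.
# Assumes `pip install funasr` and a local WAV file; the model id and argument
# names follow the project's published examples and may change.
from funasr import AutoModel
from funasr.utils.postprocess_utils import rich_transcription_postprocess

model = AutoModel(
    model="iic/SenseVoiceSmall",  # ModelScope model id (assumed)
    trust_remote_code=True,
)

# language="auto" lets the model detect the spoken language; use_itn enables
# inverse text normalization (punctuation, digits) in the transcript.
result = model.generate(
    input="example.wav",  # hypothetical input file
    language="auto",
    use_itn=True,
)

# SenseVoice output carries inline tags for emotion and audio events;
# the postprocess helper converts them into readable text.
print(rich_transcription_postprocess(result[0]["text"]))
```

Because SenseVoice emits emotion and audio-event labels inline with the transcript, the same call covers ASR, emotion recognition, and event detection; the postprocessing step simply decides how much of that richer output to surface.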

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2407.04051
Document Type:
Working Paper