
MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Dataset

Authors :
Kim, Sung-Bin
Lee, Chae-Yeon
Son, Gihun
Oh, Hyun-Bin
Ju, Janghoon
Nam, Suekyeong
Oh, Tae-Hyun
Publication Year : 2024

Abstract

Recent studies in speech-driven 3D talking head generation have achieved convincing results in verbal articulation. However, lip-sync accuracy degrades when these models are applied to input speech in other languages, possibly due to the lack of datasets covering a broad spectrum of facial movements across languages. In this work, we introduce a novel task: generating 3D talking heads from speech in diverse languages. We collect a new multilingual 2D video dataset comprising over 420 hours of talking videos in 20 languages. With our proposed dataset, we present a multilingually enhanced model that incorporates language-specific style embeddings, enabling it to capture the unique mouth movements associated with each language. Additionally, we present a metric for assessing lip-sync accuracy in multilingual settings. We demonstrate that training a 3D talking head model on our proposed dataset significantly enhances its multilingual performance. Codes and datasets are available at https://multi-talk.github.io/.

Comment: Interspeech 2024
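The abstract describes conditioning the generator on a learned, language-specific style embedding so that per-language mouth movements can be captured. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' released implementation: the class name, layer choices, and dimensions (e.g. audio_dim, num_vertices) are assumptions made only for demonstration.

```python
# Hypothetical sketch (not the authors' code): one way a language-specific
# style embedding could condition a speech-driven 3D talking head model.
import torch
import torch.nn as nn

class LanguageConditionedTalkingHead(nn.Module):
    def __init__(self, num_languages=20, audio_dim=768, style_dim=64,
                 hidden_dim=256, num_vertices=5023):
        super().__init__()
        # One learnable style vector per language (the dataset covers 20 languages).
        self.language_style = nn.Embedding(num_languages, style_dim)
        # Project per-frame audio features (e.g. from a pretrained speech encoder).
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        # Fuse audio and style over time, then decode per-frame vertex offsets.
        self.decoder = nn.GRU(hidden_dim + style_dim, hidden_dim, batch_first=True)
        self.vertex_head = nn.Linear(hidden_dim, num_vertices * 3)

    def forward(self, audio_feats, language_id):
        # audio_feats: (batch, frames, audio_dim); language_id: (batch,)
        b, t, _ = audio_feats.shape
        style = self.language_style(language_id)          # (batch, style_dim)
        style = style.unsqueeze(1).expand(b, t, -1)       # broadcast over frames
        x = torch.cat([self.audio_proj(audio_feats), style], dim=-1)
        h, _ = self.decoder(x)
        # Per-frame displacements to be added to a neutral face template.
        return self.vertex_head(h).view(b, t, -1, 3)

# Usage sketch: offsets = model(audio_feats, torch.tensor([language_index]))
```

At inference time, the language index could be selected manually or predicted from the input audio; the paper's actual fusion mechanism and mesh topology may differ.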

Details

Database : arXiv
Publication Type : Report
Accession number : edsarx.2406.14272
Document Type : Working Paper