
EmoTalk: Speech-driven emotional disentanglement for 3D face animation

Authors:
Peng, Ziqiao
Wu, Haoyu
Song, Zhenbo
Xu, Hao
Zhu, Xiangyu
Liu, Hongyan
He, Jun
Fan, Zhaoxin
Publication Year:
2023

Abstract

Speech-driven 3D face animation aims to generate realistic facial expressions that match the speech content and emotion. However, existing methods often neglect emotional facial expressions or fail to disentangle them from speech content. To address this issue, this paper proposes an end-to-end neural network that disentangles different emotions in speech so as to generate rich 3D facial expressions. Specifically, we introduce the emotion disentangling encoder (EDE), which separates emotion from content in speech by cross-reconstructing speech signals with different emotion labels. An emotion-guided feature fusion decoder is then employed to generate a 3D talking face with enhanced emotion; the decoder is driven by the disentangled identity, emotion, and content embeddings, enabling controllable personal and emotional styles. Finally, given the scarcity of 3D emotional talking-face data, we resort to supervision from facial blendshapes, which enables the reconstruction of plausible 3D faces from 2D emotional data, and we contribute a large-scale 3D emotional talking-face dataset (3D-ETF) to train the network. Experiments and user studies demonstrate that our approach outperforms state-of-the-art methods and exhibits more diverse facial movements. We recommend watching the supplementary video: https://ziqiaopeng.github.io/emotalk
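To make the cross-reconstruction idea behind the EDE concrete, the sketch below shows the general disentanglement recipe the abstract describes: encode content and emotion with separate streams, swap the emotion codes between two utterances that share spoken content but carry different emotion labels, and reconstruct the swapped targets. All module names, dimensions, and the GRU/linear layers here are hypothetical placeholders, not the authors' architecture; the sketch also assumes paired utterances aligned to the same number of frames.

import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossReconSketch(nn.Module):
    """Toy two-stream encoder/decoder for emotion-content disentanglement.

    Illustrative only: layer choices and sizes are assumptions,
    not the EmoTalk implementation.
    """

    def __init__(self, feat_dim: int = 128, emb_dim: int = 64):
        super().__init__()
        self.content_enc = nn.GRU(feat_dim, emb_dim, batch_first=True)
        self.emotion_enc = nn.GRU(feat_dim, emb_dim, batch_first=True)
        self.dec = nn.Linear(2 * emb_dim, feat_dim)

    def encode(self, x: torch.Tensor):
        # x: (batch, frames, feat_dim) speech features.
        content, _ = self.content_enc(x)   # frame-level content code
        _, h = self.emotion_enc(x)         # final hidden state
        return content, h[-1]              # utterance-level emotion code

    def decode(self, content: torch.Tensor, emotion: torch.Tensor):
        # Broadcast the emotion code over all frames, then fuse.
        emotion = emotion.unsqueeze(1).expand(-1, content.size(1), -1)
        return self.dec(torch.cat([content, emotion], dim=-1))


def cross_reconstruction_loss(model, x_a1, x_a2, x_b1, x_b2):
    """x_a1/x_a2 share spoken content 'a' under emotions '1'/'2';
    likewise x_b1/x_b2 for content 'b'. Swapping emotion codes and
    reconstructing the swapped targets pushes content and emotion
    into separate embeddings."""
    c_a1, e_a1 = model.encode(x_a1)
    c_b2, e_b2 = model.encode(x_b2)
    rec_a2 = model.decode(c_a1, e_b2)  # content 'a' + emotion '2'
    rec_b1 = model.decode(c_b2, e_a1)  # content 'b' + emotion '1'
    return F.l1_loss(rec_a2, x_a2) + F.l1_loss(rec_b1, x_b1)


if __name__ == "__main__":
    model = CrossReconSketch()
    batch = [torch.randn(2, 50, 128) for _ in range(4)]
    print(cross_reconstruction_loss(model, *batch).item())

In the paper's setting the decoder additionally takes an identity embedding and is supervised with blendshape coefficients rather than raw speech features; the loss structure above only illustrates the swap-and-reconstruct principle.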

Details

Language:
English
Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....9e3ff69a6e4fca3a54b35970abdc46fd