
Neural Voice Puppetry: Audio-driven Facial Reenactment

Authors:
Thies, Justus
Elgharib, Mohamed
Tewari, Ayush
Theobalt, Christian
Nießner, Matthias
Source:
ECCV 2020
Publication Year:
2019

Abstract

We present Neural Voice Puppetry, a novel approach for audio-driven facial video synthesis. Given an audio sequence of a source person or digital assistant, we generate a photo-realistic output video of a target person that is in sync with the audio of the source input. This audio-driven facial reenactment is driven by a deep neural network that employs a latent 3D face model space. Through the underlying 3D representation, the model inherently learns temporal stability while we leverage neural rendering to generate photo-realistic output frames. Our approach generalizes across different people, allowing us to synthesize videos of a target actor with the voice of any unknown source actor or even synthetic voices that can be generated utilizing standard text-to-speech approaches. Neural Voice Puppetry has a variety of use-cases, including audio-driven video avatars, video dubbing, and text-driven video synthesis of a talking head. We demonstrate the capabilities of our method in a series of audio- and text-based puppetry examples, including comparisons to state-of-the-art techniques and a user study.

Comment: Video: https://youtu.be/s74_yQiJMXA; Project/Demo/Code: https://justusthies.github.io/posts/neural-voice-puppetry/
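The abstract describes a pipeline in which audio features are mapped into a latent 3D face expression space, and a neural renderer then produces photo-realistic frames. Below is a minimal PyTorch sketch of the audio-to-expression stage only; the class name, feature dimensions, window size, and layer choices are illustrative assumptions, not the paper's exact architecture.

```python
# A hedged sketch of an audio-to-expression network, assuming per-frame
# speech features (e.g., from a pretrained speech-recognition model) and a
# blendshape-style expression space. All dimensions below are assumptions.
import torch
import torch.nn as nn

class AudioToExpression(nn.Module):
    """Maps a short window of per-frame audio features to coefficients
    in a latent 3D face expression space."""

    def __init__(self, audio_dim=29, expr_dim=76):
        super().__init__()
        # Temporal convolutions over the audio window smooth the prediction,
        # in the spirit of the temporal stability the abstract attributes
        # to the underlying 3D representation.
        self.encoder = nn.Sequential(
            nn.Conv1d(audio_dim, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.02),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.LeakyReLU(0.02),
            nn.AdaptiveAvgPool1d(1),  # collapse the temporal axis
        )
        self.decoder = nn.Linear(128, expr_dim)

    def forward(self, audio_window):
        # audio_window: (batch, audio_dim, window) of speech features.
        h = self.encoder(audio_window).squeeze(-1)  # (batch, 128)
        return self.decoder(h)  # (batch, expr_dim) expression coefficients

# Usage: one window of 16 audio frames -> one set of expression coefficients.
model = AudioToExpression()
coeffs = model(torch.randn(1, 29, 16))
print(coeffs.shape)  # torch.Size([1, 76])
```

In the full system, the predicted coefficients would drive the 3D face model, whose rendering is then refined by the neural rendering network mentioned in the abstract; that stage is not sketched here.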

Details

Database:
arXiv
Journal:
ECCV 2020
Publication Type:
Report
Accession number:
edsarx.1912.05566
Document Type:
Working Paper