
Dynamic Neural Textures: Generating Talking-Face Videos with Continuously Controllable Expressions

Authors:
Ye, Zipeng
Sun, Zhiyao
Wen, Yu-Hui
Sun, Yanan
Lv, Tian
Yi, Ran
Liu, Yong-Jin
Publication Year:
2022

Abstract

Recently, talking-face video generation has received considerable attention. So far, most methods generate results with neutral expressions, or with expressions that are implicitly determined by neural networks in an uncontrollable way. In this paper, we propose a method to generate talking-face videos with continuously controllable expressions in real time. Our method is based on an important observation: in contrast to facial geometry of moderate resolution, most expression information lies in textures. We therefore make use of neural textures to generate high-quality talking-face videos and design a novel neural network that generates neural textures for image frames (which we call dynamic neural textures) based on the input expression and a continuous intensity expression coding (CIEC). Our method uses a 3DMM as the 3D model from which to sample the dynamic neural texture. Since the 3DMM does not cover the teeth area, we propose a teeth submodule to complete the texture details in the teeth region. Results and an ablation study show the effectiveness of our method in generating high-quality talking-face videos with continuously controllable expressions. We also set up four baseline methods by combining existing representative methods and compare them with ours. Experimental results, including a user study, show that our method achieves the best performance.
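
As a rough illustration of the pipeline the abstract describes, the following PyTorch sketch generates a per-frame neural texture conditioned on an expression label and a continuous intensity, then samples it into image space with UV coordinates such as those rasterized from a 3DMM mesh. All module names, layer sizes, and the assumed form of the CIEC vector (intensity-scaled one-hot coding) are illustrative assumptions, not the authors' implementation; the teeth submodule and the final rendering network are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicTextureGenerator(nn.Module):
    # Maps an expression code plus a continuous intensity (the assumed
    # CIEC form) to a multi-channel neural texture for one frame.
    def __init__(self, num_expressions=6, tex_channels=16):
        super().__init__()
        self.fc = nn.Linear(num_expressions, 128 * 8 * 8)
        self.decoder = nn.Sequential(  # upsample 8x8 -> 64x64 feature texture
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, tex_channels, 3, padding=1),
        )

    def forward(self, expr_onehot, intensity):
        # Assumed CIEC: one-hot expression scaled by a continuous intensity.
        ciec = expr_onehot * intensity.unsqueeze(-1)   # (B, num_expressions)
        x = self.fc(ciec).view(-1, 128, 8, 8)
        return self.decoder(x)                         # (B, C, 64, 64) neural texture

# Sampling: UV coordinates in [-1, 1], e.g. rasterized per pixel from the
# 3DMM mesh, pull texture features into screen space; a rendering network
# (not shown) would map these features to RGB frames.
gen = DynamicTextureGenerator()
expr = F.one_hot(torch.tensor([2]), num_classes=6).float()  # hypothetical "happy" class
tex = gen(expr, torch.tensor([0.7]))                        # 70% expression intensity
uv = torch.rand(1, 64, 64, 2) * 2 - 1                       # stand-in UV map
features = F.grid_sample(tex, uv, align_corners=False)      # (1, 16, 64, 64)

Because the expression condition enters the generator as a continuous scalar per class, varying the intensity smoothly varies the generated texture, which is the property the abstract's "continuously controllable expressions" refers to.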

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2204.06180
Document Type:
Working Paper