Face Reenactment Based on Unsupervised Motion Transfer and Video Correction.
- Source :
- Journal of Computer Engineering & Applications; Oct2023, Vol. 59 Issue 19, p192-200, 9p
- Publication Year :
- 2023
-
Abstract
- Face reenactment aims to transfer the upper-body motions of a driving actor to a target actor. Current methods either fail to transfer motion adequately or cannot synthesize high-quality video. This paper proposes a novel face reenactment method based on unsupervised motion transfer and deep-learning-based correction. First, the motion of the driving actor is largely transferred to the target via an unsupervised motion model, yielding a rough synthetic target video. Then, a generative neural network with a spatio-temporal structure is designed to correct the rough video into a realistic and smooth one. To synthesize smooth and detailed video, 3D convolution and an attention mechanism are introduced into the network to process temporal information and guide the video correction. To avoid synthesizing a background with artifacts, the background information is embedded into the network as fixed parameters. To improve the realism of the teeth, a mouth enhancement loss is designed. The network is trained in an adversarial manner, ensuring the realism of the generated images. Experiments show that this method synthesizes high-quality target videos and outperforms current state-of-the-art face reenactment methods. [ABSTRACT FROM AUTHOR]
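The abstract mentions a "mouth enhancement loss" for improving the realism of the teeth but gives no formulation. A minimal sketch of one plausible reading, assuming it is an L1 reconstruction loss with extra weight on a mouth-region mask; the function name, the mask input, and the weighting factor `lam` are illustrative assumptions, not the paper's definition:

```python
# Hypothetical sketch of a mouth-weighted reconstruction loss.
# The abstract only states that a "mouth enhancement loss is designed";
# this particular form (global L1 plus a weighted mouth-only L1 term)
# is an assumption for illustration.
import numpy as np

def mouth_enhanced_l1(pred, target, mouth_mask, lam=2.0):
    """L1 loss over the whole frame plus a weighted L1 term restricted
    to the mouth region (mouth_mask is 1 inside the mouth, 0 elsewhere)."""
    err = np.abs(pred - target)
    base = err.mean()                                             # global reconstruction term
    mouth = (err * mouth_mask).sum() / (mouth_mask.sum() + 1e-8)  # mouth-region term
    return base + lam * mouth
```

In training, such a term would be added to the adversarial and reconstruction objectives so that gradients in the mouth region are amplified relative to the rest of the frame.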
- Subjects :
- Generative adversarial networks
- Videos
Details
- Language :
- Chinese
- ISSN :
- 1002-8331
- Volume :
- 59
- Issue :
- 19
- Database :
- Complementary Index
- Journal :
- Journal of Computer Engineering & Applications
- Publication Type :
- Academic Journal
- Accession number :
- 172996956
- Full Text :
- https://doi.org/10.3778/j.issn.1002-8331.2205-0293