1. Cross-Identity Motion Transfer for Arbitrary Objects Through Pose-Attentive Video Reassembling
- Author
- Subin Jeon, Seoung Wug Oh, Seonghyeon Nam, and Seon Joo Kim
- Subjects
- Computer science, Computer vision, Artificial intelligence, Motion transfer, Image warping, Generative adversarial network, Object (computer science)
- Abstract
We propose an attention-based network for transferring motion between arbitrary objects. Given a source image (or images) and a driving video, our network animates the subject in the source images according to the motion in the driving video. In our attention mechanism, dense similarities between the learned keypoints in the source and driving images are computed in order to retrieve appearance information from the source images. Unlike the well-studied warping-based models, our attention-based model has several advantages. By reassembling non-locally searched pieces of the source content, our approach produces more realistic outputs. Furthermore, our system can exploit multiple observations of the source appearance (e.g., the front and sides of a face) to make the results more accurate. To reduce the training-testing discrepancy of self-supervised learning, a novel cross-identity training scheme is additionally introduced, under which our network is trained to transfer motion between different subjects, as in the real testing scenario. Experimental results validate that our method produces visually pleasing results in various object domains and outperforms previous works.
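The core retrieval step described in the abstract — computing dense similarities between source and driving keypoint features, then reassembling source appearance as a similarity-weighted sum — can be sketched as scaled dot-product attention. This is a minimal NumPy illustration, not the authors' implementation; all function and variable names (`attention_reassemble`, `q_drive`, `k_source`, `v_source`) and the feature shapes are assumptions for illustration.

```python
import numpy as np

def attention_reassemble(q_drive, k_source, v_source):
    """Retrieve source appearance via dense similarities (scaled dot-product attention).

    q_drive:  (N_d, C) query features from driving-frame keypoints (hypothetical shapes)
    k_source: (N_s, C) key features from source-image keypoints
    v_source: (N_s, C) appearance (value) features from the source image(s)
    Returns:  (N_d, C) appearance features reassembled for the driving pose.
    """
    d = q_drive.shape[-1]
    scores = q_drive @ k_source.T / np.sqrt(d)      # (N_d, N_s) dense similarities
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over all source positions
    return weights @ v_source                       # non-local reassembly of source content

# Multiple source observations (e.g. front and side views) can be used by
# simply concatenating their keys and values along the source axis.
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = np.concatenate([rng.standard_normal((6, 8)), rng.standard_normal((6, 8))])
v = np.concatenate([rng.standard_normal((6, 8)), rng.standard_normal((6, 8))])
out = attention_reassemble(q, k, v)
print(out.shape)
```

Because the softmax runs over every source position, each driving keypoint can draw appearance from anywhere in the source — the "non-locally searched pieces" contrasted with warping-based models, which move each source pixel to a single target location.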
- Published
- 2020