
Human Pose Transfer with Augmented Disentangled Feature Consistency.

Authors :
Kun Wu
Chengxiang Yin
Zhengping Che
Bo Jiang
Jian Tang
Zheng Guan
Gangyi Ding
Source :
ACM Transactions on Intelligent Systems & Technology; Feb 2024, Vol. 15, Issue 1, p1-22, 22p
Publication Year :
2024

Abstract

Deep generative models have made great progress in synthesizing images with arbitrary human poses and in transferring the pose of one person to another. Although many methods have been proposed to generate images with high visual fidelity, the main challenge remains and stems from two fundamental issues: pose ambiguity and appearance inconsistency. To alleviate these limitations and improve the quality of the synthesized images, we propose a pose transfer network with augmented Disentangled Feature Consistency (DFC-Net) to facilitate human pose transfer. Given a pair of images containing the source and target person, DFC-Net extracts pose information from the source and static appearance information from the target, then synthesizes an image of the target person with the desired pose from the source. Moreover, DFC-Net leverages disentangled feature consistency losses in the adversarial training to strengthen transfer coherence, and integrates a keypoint amplifier to enhance pose feature extraction. With the help of the disentangled feature consistency losses, we further propose a novel data augmentation scheme that introduces unpaired support data with augmented consistency constraints to improve the generality and robustness of DFC-Net. Extensive experimental results on Mixamo-Pose and EDN-10k demonstrate that DFC-Net achieves state-of-the-art performance on pose transfer. [ABSTRACT FROM AUTHOR]
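The disentangled feature consistency idea described in the abstract can be sketched as follows: separate encoders extract pose features and appearance features, and feature-space losses encourage the generated image to keep the source's pose and the target's appearance. This is a minimal illustrative sketch only; the encoder architectures, loss form (L1 on features), and all module names are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in encoder mapping a 3-channel image to a feature vector.
    (Hypothetical; the paper's pose/appearance encoders are far larger.)"""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, dim),
        )

    def forward(self, x):
        return self.net(x)

def feature_consistency_loss(pose_enc, app_enc, generated, source, target):
    """Penalize mismatch between the generated image's pose features and the
    source's, and between its appearance features and the target's."""
    pose_term = F.l1_loss(pose_enc(generated), pose_enc(source))
    app_term = F.l1_loss(app_enc(generated), app_enc(target))
    return pose_term + app_term

# Placeholder tensors standing in for real images and a generator's output.
source = torch.rand(2, 3, 32, 32)     # person providing the pose
target = torch.rand(2, 3, 32, 32)     # person providing the appearance
generated = torch.rand(2, 3, 32, 32)  # synthesized image (placeholder)

pose_enc, app_enc = TinyEncoder(), TinyEncoder()
loss = feature_consistency_loss(pose_enc, app_enc, generated, source, target)
```

In training, a loss of this shape would be added to the adversarial objective; because it depends only on encoder features rather than paired pixels, the same constraint can also be applied to unpaired support images, matching the data augmentation scheme the abstract describes.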

Details

Language :
English
ISSN :
2157-6904
Volume :
15
Issue :
1
Database :
Complementary Index
Journal :
ACM Transactions on Intelligent Systems & Technology
Publication Type :
Academic Journal
Accession number :
174955413
Full Text :
https://doi.org/10.1145/3626241