
Follow-Your-Pose v2: Multiple-Condition Guided Character Image Animation for Stable Pose Control

Authors:
Xue, Jingyun
Wang, Hongfa
Tian, Qi
Ma, Yue
Wang, Andong
Zhao, Zhiyuan
Min, Shaobo
Zhao, Wenzhe
Zhang, Kaihao
Shum, Heung-Yeung
Liu, Wei
Liu, Mengyang
Luo, Wenhan
Publication Year:
2024

Abstract

Pose-controllable character video generation is in high demand, with extensive applications in fields such as automatic advertising and content creation on social media platforms. While existing character image animation methods using pose sequences and reference images have shown promising performance, they tend to produce incoherent animations in complex scenarios, such as multi-character animation and body occlusion. Additionally, current methods require large-scale, high-quality videos with stable backgrounds and temporal consistency as training data; otherwise, their performance deteriorates significantly. These two issues hinder the practical use of character image animation tools. In this paper, we propose Follow-Your-Pose v2, a practical and robust framework that can be trained on noisy open-source videos readily available on the internet. Multi-condition guiders are designed to address the challenges of background stability, body occlusion in multi-character generation, and consistency of character appearance. Moreover, to fill the gap in fair evaluation of multi-character pose animation, we propose a new benchmark comprising approximately 4,000 frames. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods by a margin of over 35% across two datasets and seven metrics. Meanwhile, qualitative assessments reveal a significant improvement in the quality of the generated videos, particularly in scenarios involving complex backgrounds and multi-character body occlusion, indicating the superiority of our approach.

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.03035
Document Type:
Working Paper