
Learning to Animate Images from A Few Videos to Portray Delicate Human Actions

Authors:
Li, Haoxin
Yu, Yingchen
Wu, Qilong
Zhang, Hanwang
Li, Boyang
Bai, Song
Publication Year:
2025

Abstract

Despite recent progress, video generative models still struggle to animate human actions from static images, particularly for uncommon actions whose training data are limited. In this paper, we investigate the task of learning to animate human actions from a small number of videos -- 16 or fewer -- which is highly valuable in real-world applications like video and movie production. Few-shot learning of generalizable motion patterns while ensuring smooth transitions from the initial reference image is exceedingly challenging. We propose FLASH (Few-shot Learning to Animate and Steer Humans), which improves motion generalization by aligning motion features and inter-frame correspondence relations between videos that share the same motion but have different appearances. This approach minimizes overfitting to visual appearances in the limited training data and enhances the generalization of learned motion patterns. Additionally, FLASH extends the decoder with additional layers to compensate for details lost in the latent space, fostering smooth transitions from the initial reference image. Experiments demonstrate that FLASH effectively animates images with unseen human or scene appearances into specified actions while maintaining smooth transitions from the reference image.
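The alignment idea described in the abstract -- matching motion features and inter-frame correspondence relations across two clips that share an action but differ in appearance -- can be sketched with simple losses. The following is a minimal illustration, not the authors' implementation: the feature shapes, the cosine-based frame alignment, and the correspondence-matrix comparison are all assumptions made here for concreteness.

```python
import numpy as np

def motion_alignment_loss(feats_a, feats_b):
    """Hypothetical loss encouraging per-frame motion features of two clips
    with the same action (but different appearances) to match.
    feats_a, feats_b: arrays of shape (T, D), one feature vector per frame."""
    # L2-normalize each frame's feature vector
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    # 1 - cosine similarity, averaged over frames
    return float(np.mean(1.0 - np.sum(a * b, axis=1)))

def correspondence_matrix(feats):
    """Inter-frame correspondence: pairwise cosine similarity between frames."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return f @ f.T  # shape (T, T)

def correspondence_alignment_loss(feats_a, feats_b):
    """Hypothetical loss aligning the frame-to-frame correspondence structure
    of two clips, independent of their appearance-specific features."""
    ca = correspondence_matrix(feats_a)
    cb = correspondence_matrix(feats_b)
    return float(np.mean((ca - cb) ** 2))
```

Intuitively, the second loss compares only the *relational* structure between frames (which frames resemble which), so it is less sensitive to appearance than raw feature matching, mirroring the abstract's motivation for reducing appearance overfitting.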

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2503.00276
Document Type:
Working Paper