
Controllable Longer Image Animation with Diffusion Models

Authors :
Wang, Qiang
Liu, Minghua
Hu, Junjun
Jiang, Fan
Xu, Mu
Publication Year :
2024

Abstract

Generating realistic animated videos from static images is an important area of research in computer vision. Methods based on physical simulation and motion prediction have achieved notable advances, but they are often limited to specific object textures and motion trajectories and fail to capture highly complex environments and physical dynamics. In this paper, we introduce an open-domain controllable image animation method using motion priors with video diffusion models. Our method achieves precise control over the direction and speed of motion in the movable region by extracting motion field information from videos and learning moving trajectories and strengths. Current pretrained video generation models are typically limited to very short clips, usually fewer than 30 frames. In contrast, we propose an efficient long-duration video generation method based on noise rescheduling, tailored specifically for image animation tasks, which enables the creation of videos over 100 frames in length while maintaining consistency in scene content and motion coordination. Specifically, we decompose the denoising process into two distinct phases: shaping the scene contours and refining the motion details. We then reschedule the noise so that the generated frame sequences maintain long-distance noise correlation. We conducted extensive experiments with 10 baselines, encompassing both commercial tools and academic methods, which demonstrate the superiority of our approach. Our project page: https://wangqiang9.github.io/Controllable.github.io
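The abstract's key idea of rescheduling noise so that distant frames stay correlated can be illustrated with a minimal sketch. The paper does not specify its exact scheme; the code below is an assumption-laden toy version in which every frame's initial noise mixes a shared base component with fresh per-frame noise (the function name `rescheduled_noise` and the blend weight `alpha` are hypothetical, not from the paper). A variance-preserving mix keeps the result unit-variance Gaussian, as a diffusion sampler expects.

```python
import numpy as np

def rescheduled_noise(num_frames, shape, alpha=0.5, seed=0):
    """Toy long-range-correlated initial noise for a frame sequence.

    Each frame's noise is a variance-preserving blend of one shared
    base tensor (giving long-distance correlation across all frames)
    and independent per-frame noise (giving frame-to-frame variation).
    `alpha` is a hypothetical correlation strength in [0, 1].
    """
    rng = np.random.default_rng(seed)
    base = rng.standard_normal(shape)  # shared across all frames
    frames = []
    for _ in range(num_frames):
        fresh = rng.standard_normal(shape)  # independent per frame
        # sqrt weights preserve unit variance of the Gaussian mix
        eps = np.sqrt(alpha) * base + np.sqrt(1.0 - alpha) * fresh
        frames.append(eps)
    return np.stack(frames)

# e.g. initial latents for a 100-frame animation
noise = rescheduled_noise(100, (4, 8, 8))
```

With `alpha=0.5`, the expected correlation between any two frames' noise is about 0.5 regardless of their temporal distance, which is the long-distance correlation property the abstract refers to.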

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438560943
Document Type :
Electronic Resource