
Animation from Blur: Multi-modal Blur Decomposition with Motion Guidance

Authors:
Zhong, Zhihang
Sun, Xiao
Wu, Zhirong
Zheng, Yinqiang
Lin, Stephen
Sato, Imari
Publication Year:
2022

Abstract

We study the challenging problem of recovering detailed motion from a single motion-blurred image. Existing solutions estimate a single image sequence without accounting for the motion ambiguity in each region, so their results tend to converge to the mean of the multi-modal possibilities. In this paper, we explicitly account for such motion ambiguity, allowing us to generate multiple plausible solutions, all in sharp detail. The key idea is to introduce a motion guidance representation: a compact quantization of 2D optical flow with only four discrete motion directions. Conditioned on the motion guidance, the blur decomposition is steered toward a specific, unambiguous solution by a novel two-stage decomposition network. We propose a unified framework for blur decomposition that supports various interfaces for generating the motion guidance, including human input, motion information from adjacent video frames, and learning from a video dataset. Extensive experiments on synthetic datasets and real-world data show that the proposed framework is qualitatively and quantitatively superior to previous methods, and also offers the merit of producing physically plausible and diverse solutions. Code is available at https://github.com/zzh-tech/Animation-from-Blur.

Comment: ECCV 2022
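To make the motion guidance representation concrete, the Python sketch below quantizes a dense 2D optical flow field into four discrete direction bins plus a static label. The axis-aligned binning, the magnitude threshold, and the function name quantize_flow_to_guidance are illustrative assumptions, not the paper's exact encoding; see the linked repository for the authors' implementation.

import numpy as np

def quantize_flow_to_guidance(flow, magnitude_threshold=0.5):
    """Quantize a (H, W, 2) optical flow field into four discrete
    motion directions, as a minimal sketch of the motion guidance
    idea. Bin layout (an assumption): 0 = right, 1 = up,
    2 = left, 3 = down; pixels with flow magnitude below the
    (hypothetical) threshold are labeled -1 (static).
    """
    dx, dy = flow[..., 0], flow[..., 1]
    magnitude = np.sqrt(dx**2 + dy**2)

    # Flow angle in [-pi, pi], folded into four 90-degree sectors
    # centered on the coordinate axes.
    angle = np.arctan2(dy, dx)
    bins = np.round(angle / (np.pi / 2)).astype(int) % 4

    return np.where(magnitude > magnitude_threshold, bins, -1)

# Usage example: a synthetic flow field moving uniformly to the right
# quantizes to bin 0 everywhere.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 2.0  # dx = 2, dy = 0
print(quantize_flow_to_guidance(flow))

Such a coarse, discrete representation is what makes the guidance easy to supply through different interfaces (user strokes, adjacent frames, or a learned prior), since it only requires choosing one of a few directions per region rather than a full flow field.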

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2207.10123
Document Type: Working Paper