
Video deblurring via motion compensation and adaptive information fusion.

Authors :
Zhan, Zongqian
Yang, Xue
Li, Yihui
Pang, Chao
Source :
Neurocomputing. May 2019, Vol. 341, p88-98. 11p.
Publication Year :
2019

Abstract

Non-uniform motion blur caused by camera shake or object motion is a common artifact in videos captured by hand-held devices. Recent advances in video deblurring have shown that convolutional neural networks (CNNs) are able to aggregate information from multiple unaligned consecutive frames to generate sharper images. However, without explicit image alignment, most existing CNN-based methods introduce temporal artifacts, especially when the input frames are severely blurred. To address this, we propose a novel video deblurring method that handles spatially varying blur in dynamic scenes. In particular, we introduce a motion estimation and motion compensation module that estimates the optical flow from the blurry images and then warps the previously deblurred frame to restore the current frame. Thus, the previous processing results benefit the restoration of the subsequent frames. This recurrent scheme utilizes contextual information efficiently and facilitates the temporal coherence of the results. Furthermore, to suppress the negative effect of alignment error, we propose an adaptive information fusion module that filters the temporal information adaptively. The experimental results obtained in this study confirm that the proposed method is both effective and efficient. [ABSTRACT FROM AUTHOR]
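The abstract outlines a recurrent pipeline: estimate optical flow from the blurry frames, warp the previously deblurred frame onto the current one (motion compensation), and fuse the warped result with the current frame through an adaptively learned weight map before restoration. The paper's actual architecture is not given here, so the PyTorch sketch below is only illustrative of that recurrence: the layer sizes, the residual output, the sigmoid weight map, and the names RecurrentDeblurStep, flow_net, fusion_net, and deblur_net are assumptions, not the authors' design.

import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(img, flow):
    """Backward-warp `img` with a dense optical `flow` via bilinear sampling."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize sampling coordinates to [-1, 1] as required by grid_sample
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(img, grid, align_corners=True)


class RecurrentDeblurStep(nn.Module):
    """One recurrent step: estimate flow, warp the previous output, fuse adaptively, restore."""

    def __init__(self, ch=32):
        super().__init__()
        # Tiny stand-in for the motion estimation module (blurry frame pair -> 2-channel flow)
        self.flow_net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2, 3, padding=1),
        )
        # Adaptive information fusion: per-pixel weight from current frame + warped previous output
        self.fusion_net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )
        # Tiny stand-in for the restoration network (fused input -> residual image)
        self.deblur_net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, blurry_prev, blurry_curr, deblurred_prev):
        flow = self.flow_net(torch.cat([blurry_prev, blurry_curr], dim=1))
        warped_prev = warp(deblurred_prev, flow)            # motion compensation
        w = self.fusion_net(torch.cat([blurry_curr, warped_prev], dim=1))
        fused = w * warped_prev + (1.0 - w) * blurry_curr   # down-weight misaligned pixels
        return blurry_curr + self.deblur_net(torch.cat([blurry_curr, fused], dim=1))


if __name__ == "__main__":
    step = RecurrentDeblurStep()
    b_prev = torch.rand(1, 3, 64, 64)
    b_curr = torch.rand(1, 3, 64, 64)
    out_prev = b_prev.clone()                  # bootstrap the first frame with its blurry input
    out_curr = step(b_prev, b_curr, out_prev)  # the previous result aids the current frame
    print(out_curr.shape)                      # torch.Size([1, 3, 64, 64])

In use, this step would be unrolled over the video, feeding each restored frame back in as deblurred_prev, which is how earlier results benefit the restoration of subsequent frames in the recurrent scheme described above.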

Details

Language :
English
ISSN :
0925-2312
Volume :
341
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
135709866
Full Text :
https://doi.org/10.1016/j.neucom.2019.03.009