
Flow-Guided Sparse Transformer for Video Deblurring

Authors :
Lin, Jing
Cai, Yuanhao
Hu, Xiaowan
Wang, Haoqian
Yan, Youliang
Zou, Xueyi
Ding, Henghui
Zhang, Yulun
Timofte, Radu
Van Gool, Luc
Publication Year :
2022

Abstract

Exploiting similar and sharper scene patches in spatio-temporal neighborhoods is critical for video deblurring. However, CNN-based methods show limitations in capturing long-range dependencies and modeling non-local self-similarity. In this paper, we propose a novel framework, Flow-Guided Sparse Transformer (FGST), for video deblurring. In FGST, we customize a self-attention module, Flow-Guided Sparse Window-based Multi-head Self-Attention (FGSW-MSA). For each $query$ element on the blurry reference frame, FGSW-MSA is guided by the estimated optical flow to globally sample spatially sparse yet highly related $key$ elements corresponding to the same scene patch in neighboring frames. In addition, we present a Recurrent Embedding (RE) mechanism to transfer information from past frames and strengthen long-range temporal dependencies. Comprehensive experiments demonstrate that our proposed FGST outperforms state-of-the-art (SOTA) methods on both the DVD and GOPRO datasets and yields visually pleasing results in real-world video deblurring. Code and pre-trained models are publicly available at https://github.com/linjing7/VR-Baseline

Comment: ICML 2022; the first Transformer-based method for video deblurring
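To make the sampling idea concrete, here is a minimal PyTorch sketch of flow-guided sparse attention for a single neighboring frame and a single attention head. It is an illustration under stated assumptions, not the authors' implementation: the function name `flow_guided_sparse_attention`, the window size, and the use of bilinear `grid_sample` to fetch keys at flow-displaced locations are all assumptions made for clarity.

```python
# Hypothetical sketch of flow-guided sparse key sampling and attention.
# Not the official FGST code; see https://github.com/linjing7/VR-Baseline
# for the authors' implementation.
import torch
import torch.nn.functional as F


def flow_guided_sparse_attention(q_feat, kv_feat, flow, window=3):
    """For each query pixel in the reference frame, sample a small window
    of keys from the neighboring frame around the flow-displaced location,
    then attend over that sparse set.

    q_feat:  (B, C, H, W) features of the blurry reference frame
    kv_feat: (B, C, H, W) features of one neighboring frame
    flow:    (B, 2, H, W) flow from reference to neighbor, (dx, dy) pixels
    """
    B, C, H, W = q_feat.shape

    # Base sampling grid in pixel coordinates: base[0] = x, base[1] = y.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).to(q_feat)  # (2, H, W)

    # Spatially sparse window offsets around the flow-displaced location.
    r = window // 2
    offsets = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)]

    keys = []
    for dy, dx in offsets:
        shift = torch.tensor([dx, dy], dtype=q_feat.dtype,
                             device=q_feat.device).view(1, 2, 1, 1)
        pos = base.unsqueeze(0) + flow + shift  # (B, 2, H, W)
        # Normalize sampling positions to [-1, 1] for grid_sample.
        grid = torch.stack(
            (2 * pos[:, 0] / (W - 1) - 1, 2 * pos[:, 1] / (H - 1) - 1),
            dim=-1,
        )  # (B, H, W, 2)
        keys.append(F.grid_sample(kv_feat, grid, align_corners=True))
    k = torch.stack(keys, dim=2)  # (B, C, K, H, W), K = window * window

    # Scaled dot-product attention over the K sparse keys at each pixel.
    # For brevity, values are taken equal to keys here; FGST uses separate
    # value projections sampled at the same locations.
    attn = torch.einsum("bchw,bckhw->bkhw", q_feat, k) / C ** 0.5
    attn = attn.softmax(dim=1)
    return torch.einsum("bkhw,bckhw->bchw", attn, k)  # (B, C, H, W)
```

In the paper, this sampling is applied to several neighboring frames and multiple heads; the sketch above shows a single neighbor to keep the key idea visible: the flow moves each query's sampling window onto the same scene patch in the other frame, so only a small, highly related set of keys is attended to.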

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2201.01893
Document Type :
Working Paper