
Event-based Motion Deblurring via Multi-Temporal Granularity Fusion

Authors:
Lin, Xiaopeng
Ren, Hongwei
Huang, Yulong
Liu, Zunchang
Zhou, Yue
Fu, Haotian
Pan, Biao
Cheng, Bojun
Publication Year:
2024

Abstract

Conventional frame-based cameras inevitably produce blur due to motion occurring during the exposure time. Event cameras, bio-inspired sensors that offer continuous visual information, can enhance deblurring performance. Effectively utilizing the high-temporal-resolution event data is crucial for extracting precise motion information and improving deblurring. However, existing event-based image deblurring methods usually rely on voxel-based event representations, losing the fine-grained temporal details that are essential for fast-motion deblurring. In this paper, we first introduce a point-cloud-based event representation into the image deblurring task and propose a Multi-Temporal Granularity Network (MTGNet). It combines the spatially dense but temporally coarse-grained voxel-based event representation with the temporally fine-grained but spatially sparse point-cloud-based event representation. To seamlessly integrate these complementary representations, we design a Fine-grained Point Branch. An Aggregation and Mapping Module (AMM) is proposed to align the low-level point-based features with frame-based features, and an Adaptive Feature Diffusion Module (AFDM) is designed to manage the resolution discrepancy between event data and image data by enriching the sparse point features. Extensive subjective and objective evaluations demonstrate that our method outperforms current state-of-the-art approaches on both synthetic and real-world datasets.

Comment: 12 pages, 8 figures
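The trade-off the abstract draws between the two event representations can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: a voxel grid sums events into a fixed number of time bins (spatially dense, but sub-bin timing is discarded), while a point cloud keeps every event as an (x, y, t, p) point with its exact timestamp (temporally fine-grained, but spatially sparse). All function names and the event layout are assumptions for illustration.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events (columns: x, y, t, polarity) into a temporally
    binned voxel grid. Spatially dense but temporally coarse-grained:
    all events landing in the same bin are summed, so sub-bin timing
    information is lost."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    # Normalize timestamps to [0, num_bins) and clip into the last bin.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * num_bins
    bins = np.clip(t_norm.astype(int), 0, num_bins - 1)
    # Unbuffered accumulation handles repeated (bin, y, x) indices correctly.
    np.add.at(voxel, (bins, y, x), p)
    return voxel

def events_to_point_cloud(events):
    """Keep each event as a 4D point (x, y, t, p): temporally fine-grained,
    since every per-event timestamp is preserved, but spatially sparse."""
    return events.astype(np.float32)
```

In this sketch the voxel grid loses exactly the per-event timestamps that the point cloud retains, which is why the paper fuses the two rather than choosing one.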

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.11866
Document Type:
Working Paper