
A Task-guided, Implicitly-searched and Meta-initialized Deep Model for Image Fusion

Authors:
Liu, Risheng
Liu, Zhu
Liu, Jinyuan
Fan, Xin
Luo, Zhongxuan
Publication Year:
2023

Abstract

Image fusion plays a key role in a variety of multi-sensor-based vision systems, especially for enhancing visual quality and/or extracting aggregated features for perception. However, most existing methods treat image fusion as an isolated task, ignoring its underlying relationship with downstream vision problems. Furthermore, designing proper fusion architectures often requires substantial engineering effort, and current fusion approaches lack mechanisms for improving their flexibility and generalization ability. To mitigate these issues, we establish a Task-guided, Implicit-searched and Meta-initialized (TIM) deep model to address the image fusion problem in challenging real-world scenarios. Specifically, we first propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion. Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency. In addition, a pretext meta initialization technique is introduced to leverage diverse fusion data to support fast adaptation across different kinds of image fusion tasks. Qualitative and quantitative experimental results on different categories of image fusion problems and related downstream tasks (e.g., visual enhancement and semantic understanding) substantiate the flexibility and effectiveness of our TIM. The source code will be available at https://github.com/LiuZhu-CV/TIMFusion.

Comment: 16 pages, 12 figures, Codes are available at https://github.com/LiuZhu-CV/TIMFusion
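The "pretext meta initialization" idea in the abstract, learning a shared starting point that adapts quickly to each fusion task, follows the general pattern of gradient-based meta-learning. The sketch below illustrates that pattern only on toy 1-D quadratic "tasks"; the loss, tasks, and first-order update rule are stand-ins and are not the paper's actual formulation.

```python
# Hedged sketch of a MAML-style meta initialization on toy tasks.
# Each "task" is a quadratic loss (w - target)^2 standing in for a
# per-task fusion objective; none of this is TIM's real loss.

def task_loss(w, target):
    return (w - target) ** 2

def task_grad(w, target):
    return 2.0 * (w - target)

def meta_initialize(targets, inner_lr=0.1, outer_lr=0.05, steps=200):
    """Learn an initialization w0 from which one inner gradient step
    already does well on every task (first-order approximation)."""
    w0 = 0.0
    for _ in range(steps):
        meta_grad = 0.0
        for t in targets:
            # Inner loop: one task-specific adaptation step.
            w_adapted = w0 - inner_lr * task_grad(w0, t)
            # Outer loop: gradient of the post-adaptation loss
            # (first-order, i.e. ignoring the inner Jacobian).
            meta_grad += task_grad(w_adapted, t)
        w0 -= outer_lr * meta_grad / len(targets)
    return w0

targets = [1.0, 2.0, 3.0]  # stand-ins for distinct fusion tasks
w0 = meta_initialize(targets)
# From this meta-init, a single adaptation step per task gives a
# much lower loss than adapting from an arbitrary start such as 0.
adapted_losses = [task_loss(w0 - 0.1 * task_grad(w0, t), t)
                  for t in targets]
```

On these quadratics the meta initialization settles near the tasks' center (here 2.0), which is what makes one-step adaptation cheap; the paper's technique applies the same fast-adaptation principle to fusion networks rather than scalars.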

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2305.15862
Document Type:
Working Paper