1. Multi-modality image fusion using fuzzy set theory and compensation dictionary learning.
- Author
- Jie, Yuchan; Li, Xiaosong; Tan, Tianshu; Yang, Lemiao; Wang, Mingyi
- Abstract
• We propose a fusion method based on fuzzy set theory and compensation dictionary learning.
• A new SVB-LS edge-preserving filter is proposed for image decomposition.
• A fuzzy inference-based detection rule is proposed to reduce boundary artifacts.
• We design an information compensation learning model to enhance visualization.
• Our method yields cutting-edge fusion, detection, and segmentation performance.

Multi-modality image fusion aims to integrate complementary information from different modalities to produce superior images for advanced visual tasks. Existing fusion methods often struggle to effectively extract edges and details, especially under complex conditions where images contain mixed background-target information or are degraded by subtle noise, which often results in an incomplete or blurred representation of edges and other detail information. To address these challenges, we propose FDFusion, a novel fusion approach leveraging fuzzy set theory and compensation dictionary learning. First, we introduce the SVB-LS filter, a novel tool that smooths images while simultaneously preserving edges and global structures. This filter plays a crucial role in achieving "structure-background" decomposition, which is essential for enhancing the extraction of cross-modal information during fusion. Second, to preserve significant edges from the source images, we propose a structure-layer saliency fusion rule built on a fuzzy inference system. For non-salient structures, we introduce an intuitionistic fuzzy set similarity measure that comprehensively captures membership, non-membership, and hesitation information, which is critical for managing the complex textures in non-salient detail features. Finally, to counteract the loss of detail caused by the decomposition and reconstruction phases of the fusion process, we develop a compensation dictionary learning strategy that enhances the visibility and clarity of the fusion results. Extensive experiments demonstrate that FDFusion outperforms state-of-the-art methods in fusion performance and adapts better to complex scenes. It also excels in segmentation and detection tasks, showcasing its broad application potential. The source code is available at https://github.com/JEI981214/FDFusion.
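To illustrate the intuitionistic fuzzy set similarity idea mentioned in the abstract, the minimal Python sketch below shows one generic way such a measure (over membership, non-membership, and hesitation degrees) could weight the fusion of non-salient detail layers. The function names (`ifs_components`, `ifs_similarity`, `fuse_non_salient`), the Sugeno-type non-membership generator, and the distance-based similarity are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def ifs_components(x, lam=2.0):
    """Map values in [0, 1] to intuitionistic fuzzy components.

    Hypothetical membership generator (Sugeno-type negation); the paper's
    exact construction may differ.
    """
    mu = x                              # membership degree
    nu = (1.0 - x) / (1.0 + lam * x)    # non-membership degree
    pi = 1.0 - mu - nu                  # hesitation degree
    return mu, nu, pi

def ifs_similarity(a, b, lam=2.0):
    """Generic distance-based similarity between two IFS-encoded layers."""
    mu_a, nu_a, pi_a = ifs_components(a, lam)
    mu_b, nu_b, pi_b = ifs_components(b, lam)
    # normalized Hamming distance over the three IFS components
    d = (np.abs(mu_a - mu_b) + np.abs(nu_a - nu_b) + np.abs(pi_a - pi_b)) / 2.0
    return 1.0 - d.mean()

def fuse_non_salient(layer_a, layer_b, lam=2.0):
    """Weight two non-salient detail layers by their mutual IFS similarity."""
    # normalize to [0, 1] before fuzzification
    na = (layer_a - layer_a.min()) / (np.ptp(layer_a) + 1e-12)
    nb = (layer_b - layer_b.min()) / (np.ptp(layer_b) + 1e-12)
    s = ifs_similarity(na, nb, lam)
    # high similarity -> average; low similarity -> keep the stronger response
    avg = 0.5 * (layer_a + layer_b)
    pick = np.where(np.abs(layer_a) >= np.abs(layer_b), layer_a, layer_b)
    return s * avg + (1.0 - s) * pick
```

The point the sketch preserves is that membership, non-membership, and hesitation sum to one by construction, so the similarity score reflects hesitation as well as agreement, which is what distinguishes an intuitionistic fuzzy measure from an ordinary fuzzy one.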
- Published
- 2025