Multimodal image fusion via coupled feature learning
- Author
Farshad G. Veshki, Nora Ouzir, Sergiy A. Vorobyov, and Esa Ollila (Department of Signal Processing and Acoustics, Aalto University; Centre de vision numérique (CVN), OPtimisation Imagerie et Santé (OPIS), Inria Saclay, CentraleSupélec, Université Paris-Saclay)
- Subjects
Coupled dictionary learning, Multimodal image fusion, Joint sparse representation, Multimodal medical imaging, Infrared images, Computer Vision and Pattern Recognition [cs.CV], Artificial Intelligence [cs.AI], Bioinformatics [q-bio.QM], Signal Processing, Control and Systems Engineering, Electrical and Electronic Engineering, Software
- Abstract
This paper presents a multimodal image fusion method based on a novel decomposition model built on coupled dictionary learning. The method is general and applies to a variety of imaging modalities. The images to be fused are decomposed into correlated components, captured by sparse representations with identical supports, and uncorrelated components, identified through a Pearson correlation constraint. The resulting optimization problem is solved with an alternating minimization algorithm. Unlike other learning-based fusion methods, the proposed approach requires no training data: the correlated features are extracted online from the input images themselves. By preserving the uncorrelated components in the fused images, the method significantly improves on current fusion approaches in maintaining texture details and modality-specific information. The maximum-absolute-value rule is applied to the fusion of correlated components only, which enhances contrast resolution without causing intensity attenuation or loss of important information. Experimental results show that the proposed method outperforms state-of-the-art image fusion methods in both visual and objective evaluations.
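The fusion step described in the abstract lends itself to a short illustration. The sketch below is a minimal NumPy rendering of the maximum-absolute-value rule applied to the correlated sparse codes, with the uncorrelated components added back afterwards. All variable names (D1, D2, A1, A2, U1, U2), the toy dimensions, the averaging of the two coupled dictionaries, and the additive recombination of the uncorrelated parts are assumptions of this sketch, not the paper's exact procedure.

```python
import numpy as np

# Hypothetical shapes: patches are vectorized 8x8 blocks (64 pixels),
# the coupled dictionaries have 128 atoms, and we fuse 10 patches.
rng = np.random.default_rng(0)
D1 = rng.standard_normal((64, 128))   # coupled dictionary, modality 1
D2 = rng.standard_normal((64, 128))   # coupled dictionary, modality 2

# Joint sparse codes with identical supports: the same nonzero rows
# are active in both A1 and A2, as the decomposition model requires.
support = rng.choice(128, size=5, replace=False)
A1 = np.zeros((128, 10)); A1[support] = rng.standard_normal((5, 10))
A2 = np.zeros((128, 10)); A2[support] = rng.standard_normal((5, 10))

# Uncorrelated (modality-specific) residual components.
U1 = 0.1 * rng.standard_normal((64, 10))
U2 = 0.1 * rng.standard_normal((64, 10))

# Maximum-absolute-value rule on the correlated sparse codes only:
# coefficient-wise, keep whichever code has the larger magnitude.
A_fused = np.where(np.abs(A1) >= np.abs(A2), A1, A2)

# Reconstruct the correlated part (averaging the coupled dictionaries
# is an assumption of this sketch) and add back both uncorrelated
# parts so that modality-specific details survive in the fused result.
fused = 0.5 * (D1 + D2) @ A_fused + U1 + U2
print(fused.shape)  # (64, 10): one fused column per patch
```

Taking the coefficient-wise maximum magnitude favors, for each learned feature, the modality with the stronger response, which is consistent with the abstract's claim that restricting this rule to the correlated components enhances contrast without attenuating intensities.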
- Published
- 2022