
Texture clear multi-modal image fusion with joint sparsity model

Authors :
Zhisheng Gao
Chengfang Zhang
Source :
Optik. 130:255-265
Publication Year :
2017
Publisher :
Elsevier BV, 2017.

Abstract

Multi-modal image fusion is necessary for describing a target comprehensively. Considering the correlations between multi-source signals and the sparse characteristics of images, this paper proposes a novel multi-modal image fusion scheme with a new fusion rule based on the joint sparsity model. First, each source image is represented as a shared sparse component and an exclusive sparse component over an over-complete dictionary. Second, the designed fusion rule acts on the shared and exclusive sparse coefficients to obtain the fused sparse coefficients. Finally, the fused image is reconstructed from the fused sparse coefficients and the dictionary. The proposed approach was tested on infrared–visible image pairs and on medical images. The results were compared with those of traditional methods, including multi-scale transform based, sparse representation based, and joint sparsity representation based methods. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods in texture clarity. Moreover, the fused image shows better edge consistency and visual quality.
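The three steps described in the abstract (joint sparse decomposition, coefficient fusion, reconstruction) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: it assumes a JSM-1 style joint dictionary to separate shared and exclusive parts, a small Orthogonal Matching Pursuit solver for sparse coding, and a max-absolute-value rule (a common choice, assumed here) for merging the exclusive coefficients.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse coding of y over D."""
    residual = y.astype(float).copy()
    idx, coef = [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in idx:           # no new atom improves the fit
            break
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def jsm_fuse(y1, y2, D, k=4):
    """Joint-sparsity fusion sketch: decompose two signals into one shared
    and two exclusive sparse components, then fuse shared + merged exclusives.
    The max-abs merging rule for exclusive coefficients is an assumption."""
    n, m = D.shape
    # JSM-1 joint dictionary: [D D 0; D 0 D] -> first block codes the
    # shared component, the other two code each signal's exclusive part.
    Dj = np.block([[D, D, np.zeros((n, m))],
                   [D, np.zeros((n, m)), D]])
    x = omp(Dj, np.concatenate([y1, y2]), 3 * k)
    xc, x1, x2 = x[:m], x[m:2 * m], x[2 * m:]
    # Keep the shared part; merge exclusives by max absolute coefficient.
    xe = np.where(np.abs(x1) >= np.abs(x2), x1, x2)
    return D @ (xc + xe)
```

For identical inputs the exclusive parts vanish (up to numerical noise) and the shared component alone reconstructs the signal, which is the sanity check a joint sparsity decomposition should pass.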

Details

ISSN :
0030-4026
Volume :
130
Database :
OpenAIRE
Journal :
Optik
Accession number :
edsair.doi...........75a284e80665807c6e5b8138824e3c7d
Full Text :
https://doi.org/10.1016/j.ijleo.2016.09.126