1. Fusion of infrared and visible images via multi-layer convolutional sparse representation
- Authors
- Zhouyu Zhang, Chenyuan He, Hai Wang, Yingfeng Cai, Long Chen, Zhihua Gan, Fenghua Huang, and Yiqun Zhang
- Subjects
- Image fusion, Infrared and visible image, Convolutional sparse representation (CSR), Unmanned aerial vehicle (UAV), Electronic computers. Computer science (QA75.5-76.95)
- Abstract
Infrared and visible image fusion is an effective solution for image quality enhancement. However, conventional fusion models require decomposing the source images into image blocks, which disrupts the original structure of the images, causes a loss of detail in the fused results, and makes fusion highly sensitive to registration errors. This paper employs Convolutional Sparse Representation (CSR) to perform a global feature transformation on the source images, overcoming the drawbacks of traditional fusion models that rely on block-wise decomposition. Inspired by neural networks, a multi-layer CSR model is proposed, comprising five layers in a feed-forward arrangement: two CSR layers that acquire sparse coefficient maps, one fusion layer that combines the sparse maps, and two reconstruction layers that recover the fused image. The dataset used in this paper comprises infrared and visible images selected from public datasets, as well as registered images collected by an Unmanned Aerial Vehicle (UAV). The source images contain ground targets, marine targets, and natural landscapes. To validate the effectiveness of the proposed image fusion model, a comparative analysis is conducted against state-of-the-art (SOTA) algorithms. Experimental results demonstrate that the proposed fusion model outperforms the SOTA methods by at least 10% on the SF, EN, MI, and QAB/F fusion metrics in most image fusion cases, affirming its favorable performance.
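The five-layer pipeline described in the abstract can be sketched in simplified form. The snippet below is a minimal illustration, not the authors' implementation: it assumes a pre-learned convolutional dictionary (random filters stand in here), uses plain ISTA iterations for the two CSR coding layers, a choose-max activity rule for the fusion layer (a common rule in CSR-based fusion; the paper's actual rule may differ), and dictionary convolution for the reconstruction layers. All function names and parameters are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve


def csr_coefficients(image, filters, lam=0.01, n_iter=50, step=0.1):
    """CSR coding layer: sparse coefficient maps via ISTA (simplified sketch)."""
    maps = np.zeros((len(filters),) + image.shape)
    for _ in range(n_iter):
        # Residual between the image and its current convolutional reconstruction.
        recon = sum(fftconvolve(m, f, mode="same") for m, f in zip(maps, filters))
        resid = image - recon
        for k, f in enumerate(filters):
            # Gradient step (correlation with the filter) followed by soft-thresholding.
            grad = fftconvolve(resid, f[::-1, ::-1], mode="same")
            z = maps[k] + step * grad
            maps[k] = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return maps


def fuse_maps(maps_ir, maps_vis):
    """Fusion layer: keep the coefficient with the larger magnitude (choose-max rule)."""
    return np.where(np.abs(maps_ir) >= np.abs(maps_vis), maps_ir, maps_vis)


def reconstruct(maps, filters):
    """Reconstruction layer: sum of coefficient maps convolved with the dictionary."""
    return sum(fftconvolve(m, f, mode="same") for m, f in zip(maps, filters))
```

End to end, the forward pass is: `reconstruct(fuse_maps(csr_coefficients(ir, D), csr_coefficients(vis, D)), D)` for a shared dictionary `D`, mirroring the two coding layers, one fusion layer, and reconstruction stage described above.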
- Published
- 2024