1. Advancing infrared and visible image fusion with an enhanced multiscale encoder and attention-based networks
- Authors
Jiashuo Wang, Yong Chen, Xiaoyun Sun, Hui Xing, Fan Zhang, Shiji Song, and Shuyong Yu
- Subjects
Applied sciences, Engineering, Science
- Abstract
Infrared and visible image fusion aims to produce images that highlight key targets and offer distinct textures by merging the thermal radiation captured in infrared images with the detailed textures of visible images. Traditional autoencoder-decoder-based fusion methods often rely on manually designed fusion strategies, which lack flexibility across different scenarios. To address this limitation, we introduce EMAFusion, a fusion approach featuring an enhanced multiscale encoder and a learnable, lightweight fusion network. Our method incorporates skip connections, the convolutional block attention module (CBAM), and a nest architecture within the autoencoder-decoder framework to extract and preserve multiscale features for fusion tasks. Furthermore, we propose a fusion network driven by spatial and channel attention mechanisms, designed to precisely capture and integrate essential features from both source images. Comprehensive evaluations on the TNO image fusion dataset confirm the proposed method's superiority over existing state-of-the-art techniques, demonstrating its potential for advancing infrared and visible image fusion.
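To make the attention-based fusion idea concrete, the following is a minimal sketch (assuming PyTorch) of a CBAM-style block and a learnable fusion of infrared and visible encoder features. The module names, channel sizes, and the concatenate-then-convolve fusion rule are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style)."""
    def __init__(self, channels, reduction=8, kernel_size=7):
        super().__init__()
        # Shared MLP applied to global average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Conv over concatenated channel-wise mean/max maps for spatial attention.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel attention: re-weight feature channels.
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: re-weight spatial locations.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True)[0]], dim=1)
        return x * torch.sigmoid(self.spatial(s))

class AttentionFusion(nn.Module):
    """Learnable fusion of infrared and visible features (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.ir_att = CBAM(channels)
        self.vis_att = CBAM(channels)
        self.merge = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, f_ir, f_vis):
        # Re-weight each modality with attention, then merge with a light conv.
        return self.merge(torch.cat([self.ir_att(f_ir), self.vis_att(f_vis)], dim=1))

if __name__ == "__main__":
    fuse = AttentionFusion(channels=64)
    f_ir = torch.randn(1, 64, 128, 128)
    f_vis = torch.randn(1, 64, 128, 128)
    print(fuse(f_ir, f_vis).shape)  # torch.Size([1, 64, 128, 128])
```

Replacing a hand-crafted rule (e.g., element-wise averaging) with a module of this kind lets the fusion weights be learned end to end, which is the flexibility the abstract attributes to the learnable, lightweight fusion network.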
- Published
2024