MFTCFNet: infrared and visible image fusion network based on multi-layer feature tightly coupled.
- Source :
- Signal, Image & Video Processing; Nov2024, Vol. 18 Issue 11, p8217-8228, 12p
- Publication Year :
- 2024
Abstract
- To address the problems of target edge blur and feature loss when fusing infrared and visible images, a novel image fusion network based on multi-layer tightly coupled features, called MFTCFNet, is proposed. Because the different imaging mechanisms of infrared and visible images make feature extraction difficult, a multi-scale deep feature extraction module is designed, consisting of a deformable convolution-balanced attention mechanism and a gradient residual encoder block. The deformable convolution-balanced attention mechanism mainly addresses the target edge blur caused by single-scale features, while the gradient residual encoder block effectively reduces energy loss during feature extraction. To fully preserve feature information at different scales in the original images, a multi-layer tightly coupled feature structure is constructed. By exploiting the cross-transmission characteristics of the dual-branch network, the feature loss caused by ignoring the correlation between the original images is effectively mitigated. Extensive experiments show that the fusion results of the proposed network have more prominent targets, clearer scene information and better visual quality. [ABSTRACT FROM AUTHOR]
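As a rough illustration of the gradient-residual idea mentioned in the abstract, the sketch below adds Sobel gradient magnitudes back to a feature map through a residual connection, so edge information is retained rather than lost during processing. The function names and the choice of fixed Sobel kernels are assumptions made for illustration only; the paper's gradient residual encoder block operates on learned convolutional features, and this is not the authors' implementation.

```python
import numpy as np

def conv2d(img, kernel):
    # Naive 'same' zero-padded 2D correlation, for illustration only.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def gradient_residual_block(feat):
    # Hypothetical sketch of a gradient residual block: extract
    # horizontal/vertical gradients, combine them into a gradient
    # magnitude map, and add it back to the input as a residual.
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T
    gx = conv2d(feat, sobel_x)
    gy = conv2d(feat, sobel_y)
    grad = np.sqrt(gx ** 2 + gy ** 2)  # gradient magnitude
    return feat + grad                 # residual add preserves edge energy
```

In a flat region the gradient term vanishes and the block reduces to an identity mapping, which is the usual appeal of residual designs: the gradient branch only contributes where edges exist.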
Details
- Language :
- English
- ISSN :
- 1863-1703
- Volume :
- 18
- Issue :
- 11
- Database :
- Complementary Index
- Journal :
- Signal, Image & Video Processing
- Publication Type :
- Academic Journal
- Accession number :
- 179636378
- Full Text :
- https://doi.org/10.1007/s11760-024-03464-y