
FuseGAN: Learning to Fuse Multi-Focus Image via Conditional Generative Adversarial Network.

Authors :
Guo, Xiaopeng
Nie, Rencan
Cao, Jinde
Zhou, Dongming
Mei, Liye
He, Kangjian
Source :
IEEE Transactions on Multimedia; Aug 2019, Vol. 21 Issue 8, p1982-1996, 15p
Publication Year :
2019

Abstract

We study the problem of multi-focus image fusion, where the key challenge is accurately detecting the focused regions among multiple partially focused source images. Inspired by the success of the conditional generative adversarial network (cGAN) on image-to-image tasks, we propose a novel FuseGAN to perform images-to-image mapping for multi-focus image fusion. To satisfy the requirement of dual input and single output, the encoder of the generator in FuseGAN is designed as a Siamese network. The least squares GAN objective is employed to enhance the training stability of FuseGAN, resulting in an accurate confidence map for focus region detection. We also apply the convolutional conditional random fields technique to the confidence map to obtain a refined final decision map for better focus region detection. Moreover, because no large-scale standard dataset is available, we synthesize a sufficiently large multi-focus image dataset from the public natural image dataset PASCAL VOC 2012, using a normalized disk point spread function to simulate defocus and separating the background and foreground of each image during synthesis. We conduct extensive experiments on two public datasets to verify the effectiveness of the proposed method. The results demonstrate that the proposed method produces accurate decision maps for focus regions in multi-focus images, such that the fused images are superior to those of 11 recent state-of-the-art algorithms, both in visual perception and in quantitative analysis in terms of five metrics. [ABSTRACT FROM AUTHOR]
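The abstract states that training data are synthesized by blurring either the foreground or the background of PASCAL VOC 2012 images with a normalized disk point spread function. Below is a minimal NumPy/SciPy sketch of that general idea; the function names, the blur radius, and the use of FFT convolution are assumptions for illustration, since the paper's exact synthesis pipeline is not given in the abstract.

import numpy as np
from scipy.signal import fftconvolve

def disk_psf(radius):
    # Normalized disk point spread function with the given radius (pixels).
    # The kernel is constant inside the disk and sums to 1.
    size = 2 * int(np.ceil(radius)) + 1
    y, x = np.mgrid[:size, :size] - size // 2
    disk = ((x ** 2 + y ** 2) <= radius ** 2).astype(np.float64)
    return disk / disk.sum()

def synthesize_multi_focus_pair(image, mask, radius=5.0):
    # image : H x W x C float array in [0, 1] (an all-in-focus source image)
    # mask  : H x W binary array, 1 for the foreground object
    # Returns two partially focused source images:
    #   near_focus - foreground sharp, background defocused
    #   far_focus  - background sharp, foreground defocused
    psf = disk_psf(radius)
    blurred = np.stack(
        [fftconvolve(image[..., c], psf, mode="same") for c in range(image.shape[-1])],
        axis=-1,
    )
    m = mask[..., None].astype(np.float64)
    near_focus = m * image + (1.0 - m) * blurred
    far_focus = m * blurred + (1.0 - m) * image
    return near_focus, far_focus

In this sketch the binary object mask also serves as the ground-truth decision map for the synthesized pair, which is consistent with the abstract's goal of learning focus region detection.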

Details

Language :
English
ISSN :
1520-9210
Volume :
21
Issue :
8
Database :
Complementary Index
Journal :
IEEE Transactions on Multimedia
Publication Type :
Academic Journal
Accession number :
137644871
Full Text :
https://doi.org/10.1109/TMM.2019.2895292