
Dual-Discriminator Generative Adversarial Network with Uniform Color Information Extraction for Color Constancy.

Authors:
Huiting Xu
Zhenshan Tan
Zhijiang Li
Shuying Lyu
Source:
Journal of Imaging Science & Technology; Mar/Apr 2024, Vol. 68, Issue 2, p1-11, 11p
Publication Year:
2024

Abstract

Generative adversarial networks (GANs) have attracted extensive attention in color constancy because they allow pixel-wise supervision. However, the misinterpretation of color features and the low sensitivity of the discriminator, caused by the strong correlation among multiple features, limit the learning capability of a GAN. To address these issues, we propose a dual-discriminator generative adversarial network (DDGAN), which includes a color feature learning (CFL) module, a feature fusion discriminator (FFD) module, and a global consistency constraint (GCC) module. First, CFL attends to regions with uniform color, enabling the generator to learn distinguishable color information. Second, FFD is a discriminator module with two feature extraction branches: one extracts color features and the other extracts globally correlated features. These features are then fused to weaken structural features and enhance the discriminator's sensitivity to color features. Finally, GCC imposes global consistency constraints to reconsider the structural features weakened by FFD and to unify structural and color features, yielding images that are more consistent in both color and content. Extensive experiments on the ColorChecker RECommended, NUS 8-Camera, and Cube datasets show that our DDGAN outperforms other GAN-based methods on five popular metrics.
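The FFD module described in the abstract fuses a color-feature branch with a globally correlated feature branch before the real/fake decision, down-weighting structural features so the discriminator stays sensitive to color. The following is only a minimal NumPy sketch of that fusion idea under stated assumptions — the branch extractors (chromaticity means, channel correlations), the `w_color`/`w_global` weights, and the toy linear scorer are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def color_branch(img):
    # Illustrative color-feature extractor: mean rgb chromaticity.
    # (A stand-in for the color features the paper's FFD branch learns.)
    s = img.sum(axis=2, keepdims=True) + 1e-8
    chroma = img / s                       # per-pixel normalized chromaticity
    return chroma.mean(axis=(0, 1))        # 3-dim color descriptor

def global_branch(img):
    # Illustrative globally correlated features: correlations between
    # channel pairs over all pixels (a proxy for structure/content).
    flat = img.reshape(-1, 3).T            # shape (3, N)
    c = np.corrcoef(flat)                  # 3 x 3 correlation matrix
    return c[np.triu_indices(3, k=1)]      # 3-dim off-diagonal vector

def fused_score(img, w_color=1.0, w_global=0.3, theta=None):
    # Fuse the two branches; the smaller w_global mimics the paper's idea
    # of weakening structural features relative to color features.
    f = np.concatenate([w_color * color_branch(img),
                        w_global * global_branch(img)])
    if theta is None:
        theta = rng.normal(size=f.shape)   # toy "discriminator" weights
    logit = f @ theta
    return 1.0 / (1.0 + np.exp(-logit))    # real/fake probability in (0, 1)

img = rng.random((32, 32, 3))
print(fused_score(img))
```

In a real FFD both branches would be learned convolutional feature extractors and the fusion weights trained adversarially; the fixed scalar weighting here only illustrates how fusing can bias the decision toward color evidence.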

Details

Language:
English
ISSN:
1062-3701
Volume:
68
Issue:
2
Database:
Complementary Index
Journal:
Journal of Imaging Science & Technology
Publication Type:
Academic Journal
Accession Number:
176967868
Full Text:
https://doi.org/10.2352/J.ImagingSci.Technol.2024.68.2.020401