Dual-Attention-Guided Network for Ghost-Free High Dynamic Range Imaging.

Authors :
Yan, Qingsen
Gong, Dong
Shi, Javen Qinfeng
van den Hengel, Anton
Shen, Chunhua
Reid, Ian
Zhang, Yanning
Source :
International Journal of Computer Vision. Jan 2022, Vol. 130, Issue 1, p76-94. 19p.
Publication Year :
2022

Abstract

Ghosting artifacts caused by moving objects and misalignments are a key challenge in constructing high dynamic range (HDR) images. Current methods first register the input low dynamic range (LDR) images using optical flow before merging them. This process is error-prone, and often causes ghosting in the resulting merged image. We propose a novel dual-attention-guided end-to-end deep neural network, called DAHDRNet, which produces high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use dual-attention modules to guide the merging according to the reference image. DAHDRNet thus exploits both spatial attention and feature channel attention to achieve ghost-free merging. The spatial attention modules automatically suppress undesired components caused by misalignments and saturation, and enhance the fine details in the non-reference images. The channel attention modules adaptively rescale channel-wise features by considering the inter-dependencies between channels. The dual-attention approach is applied recurrently to further improve feature representation, and thus alignment. A dilated residual dense block is devised to make full use of the hierarchical features and increase the receptive field when hallucinating missing details. We employ a hybrid loss function, which consists of a perceptual loss, a total variation loss, and a content loss to recover photo-realistic images. Although DAHDRNet is not flow-based, it can be applied to flow-based registration to reduce artifacts caused by optical-flow estimation errors. Experiments on different datasets show that the proposed DAHDRNet achieves state-of-the-art quantitative and qualitative results. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
0920-5691
Volume :
130
Issue :
1
Database :
Academic Search Index
Journal :
International Journal of Computer Vision
Publication Type :
Academic Journal
Accession Number :
154536067
Full Text :
https://doi.org/10.1007/s11263-021-01535-y