
Segmentation Guided Sparse Transformer for Under-Display Camera Image Restoration

Authors:
Xue, Jingyun
Wang, Tao
Wang, Jun
Zhang, Kaihao
Luo, Wenhan
Ren, Wenqi
Liu, Zikun
Park, Hyunhee
Cao, Xiaochun
Publication Year:
2024

Abstract

Under-Display Camera (UDC) is an emerging technology that achieves a full-screen display by hiding the camera beneath the display panel. However, current UDC implementations cause serious image degradation: the incident light required for imaging is attenuated and diffracted as it passes through the panel, producing various artifacts in UDC images. Most existing UDC image restoration methods rely on convolutional neural network architectures, whereas Transformer-based methods have shown superior performance on most image restoration tasks, owing to the Transformer's ability to sample global features for the local reconstruction of images. In this paper, we observe that when a Vision Transformer is applied to UDC degraded image restoration, its global attention samples a large amount of redundant information and noise, and that a Transformer using sparse attention, rather than the ordinary dense attention, can alleviate this adverse impact. Building on this observation, we propose a Segmentation Guided Sparse Transformer (SGSFormer) for restoring high-quality images from UDC degraded inputs. Specifically, we use sparse self-attention to filter out redundant information and noise, directing the model's attention toward the features most relevant to the degraded regions that need reconstruction. Moreover, we integrate the instance segmentation map as prior information to guide the sparse self-attention in filtering and attending to the correct regions.

Comment: 13 pages, 10 figures
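To make the sparse self-attention idea concrete, the sketch below is a minimal, hypothetical PyTorch implementation that keeps only the top-k attention scores per query and masks the rest before the softmax, so low-relevance (redundant or noisy) tokens receive zero weight. The class name TopKSparseSelfAttention and the top_k parameter are illustrative assumptions, not the paper's actual code, and the segmentation-map guidance described in the abstract is omitted here.

```python
import torch
import torch.nn as nn

class TopKSparseSelfAttention(nn.Module):
    """Self-attention that retains only the top-k scores per query,
    discarding low-relevance (potentially redundant/noisy) entries."""

    def __init__(self, dim, num_heads=4, top_k=16):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.top_k = top_k
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, tokens, dim)
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (b, heads, n, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # scores: (b, heads, n, n)

        # Sparsify: drop everything below each query's k-th largest score,
        # so the softmax redistributes weight over the retained entries only.
        k_eff = min(self.top_k, n)
        thresh = attn.topk(k_eff, dim=-1).values[..., -1:]  # k-th largest per query
        attn = attn.masked_fill(attn < thresh, float("-inf"))
        attn = attn.softmax(dim=-1)

        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)

# Example: 64 tokens of dimension 64, keeping 16 keys per query.
tokens = torch.randn(2, 64, 64)
layer = TopKSparseSelfAttention(dim=64, num_heads=4, top_k=16)
print(layer(tokens).shape)  # torch.Size([2, 64, 64])
```

In SGSFormer the retained regions would additionally be steered by the instance segmentation prior; in this sketch the sparsification is purely score-based.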

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2403.05906
Document Type:
Working Paper