
Residual Contrastive Learning for Image Reconstruction: Learning Transferable Representations from Noisy Images

Authors:
Dong, Nanqing
Maggioni, Matteo
Yang, Yongxin
Pérez-Pellitero, Eduardo
Leonardis, Ales
McDonagh, Steven
Publication Year:
2021
Publisher:
arXiv, 2021.

Abstract

This paper is concerned with contrastive learning (CL) for low-level image restoration and enhancement tasks. We propose a new label-efficient learning paradigm based on residuals, residual contrastive learning (RCL), and derive an unsupervised visual representation learning framework, suitable for low-level vision tasks with noisy inputs. While supervised image reconstruction aims to minimize residual terms directly, RCL alternatively builds a connection between residuals and CL by defining a novel instance discrimination pretext task, using residuals as the discriminative feature. Our formulation mitigates the severe task misalignment between instance discrimination pretext tasks and downstream image reconstruction tasks, present in existing CL frameworks. Experimentally, we find that RCL can learn robust and transferable representations that improve the performance of various downstream tasks, such as denoising and super resolution, in comparison with recent self-supervised methods designed specifically for noisy inputs. Additionally, our unsupervised pre-training can significantly reduce annotation costs whilst maintaining performance competitive with fully-supervised image reconstruction.

Comment: Accepted by IJCAI 2022
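The instance discrimination pretext task described in the abstract is typically trained with an InfoNCE-style contrastive objective. The sketch below is not the paper's exact RCL loss; it is a minimal, generic InfoNCE implementation in NumPy, assuming a hypothetical setup where residual embeddings from the same scene form positive pairs and embeddings from other scenes act as negatives.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE loss over L2-normalized embeddings.

    anchors, positives: (N, D) arrays; row i of `positives` is the
    positive pair for row i of `anchors`, and every other row of
    `positives` serves as a negative for that anchor.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # subtract row max for stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with targets on the diagonal (pair i matches pair i).
    return -np.mean(np.diag(log_prob))

# Hypothetical toy data: residual embeddings of two noisy observations of
# the same underlying scene are strongly correlated (positive pairs).
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
anchors = base + 0.05 * rng.normal(size=(8, 16))
positives = base + 0.05 * rng.normal(size=(8, 16))
loss = info_nce_loss(anchors, positives)
```

With correlated positive pairs the loss falls well below the chance level of log(N); in the paper's setting the embeddings would instead come from an encoder applied to reconstruction residuals.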

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....1ce76f4ea6681017c870e8c9251108f5
Full Text:
https://doi.org/10.48550/arxiv.2106.10070