
Beyond Appearance: a Semantic Controllable Self-Supervised Learning Framework for Human-Centric Visual Tasks

Authors:
Chen, Weihua
Xu, Xianzhe
Jia, Jian
Luo, Hao
Wang, Yaohua
Wang, Fan
Jin, Rong
Sun, Xiuyu
Publication Year:
2023
Publisher:
arXiv, 2023.

Abstract

Human-centric visual tasks have attracted increasing research attention due to their widespread applications. In this paper, we aim to learn a general human representation from massive unlabeled human images that can benefit downstream human-centric tasks to the maximum extent. We call this method SOLIDER, a Semantic cOntrollable seLf-supervIseD lEaRning framework. Unlike existing self-supervised learning methods, SOLIDER utilizes prior knowledge from human images to build pseudo semantic labels and import more semantic information into the learned representation. Meanwhile, we note that different downstream tasks always require different ratios of semantic and appearance information. For example, human parsing requires more semantic information, while person re-identification needs more appearance information for identification purposes. A single learned representation therefore cannot fit all requirements. To solve this problem, SOLIDER introduces a conditional network with a semantic controller. After the model is trained, users can send values to the controller to produce representations with different ratios of semantic information, fitting the different needs of downstream tasks. Finally, SOLIDER is verified on six downstream human-centric visual tasks. It outperforms the state of the art and establishes new baselines for these tasks. The code is released at https://github.com/tinyvision/SOLIDER.

Comment: accepted by CVPR 2023
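The abstract's semantic controller can be pictured as a user-supplied scalar that blends a semantics-oriented view and an appearance-oriented view of the same backbone features. The sketch below is a minimal toy illustration of that idea only; all names (`semantic_controller`, `lam`, the two projection matrices) are hypothetical and do not reflect SOLIDER's actual architecture or API, in which the controller conditions the network itself rather than mixing two fixed projections.

```python
import numpy as np

def semantic_controller(features, lam, w_sem, w_app):
    """Toy sketch: blend a 'semantic' and an 'appearance' projection of
    shared backbone features, weighted by a controller value lam in [0, 1].
    (Hypothetical illustration, not SOLIDER's real conditional network.)"""
    sem = features @ w_sem            # projection emphasizing semantic cues
    app = features @ w_app            # projection emphasizing appearance cues
    return lam * sem + (1.0 - lam) * app

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))   # 4 tokens, 8-dim backbone features
w_sem = rng.standard_normal((8, 8))
w_app = rng.standard_normal((8, 8))

# Per the abstract, a parsing-style task would pick a value favoring
# semantics, while re-identification would favor appearance.
rep_parsing = semantic_controller(feats, 0.9, w_sem, w_app)
rep_reid = semantic_controller(feats, 0.1, w_sem, w_app)
```

One representation thus serves both kinds of downstream task by changing only the controller value at inference time, which is the property the paper's single-model design is after.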

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....fa37c0aac2e6ab3cbf4f68a9de6f6a4a
Full Text:
https://doi.org/10.48550/arxiv.2303.17602