
Self-Supervised Visual Representation Learning with Semantic Grouping

Authors :
Wen, Xin
Zhao, Bingchen
Zheng, Anlin
Zhang, Xiangyu
Qi, Xiaojuan
Publication Year :
2022

Abstract

In this paper, we tackle the problem of learning visual representations from unlabeled scene-centric data. Existing works have demonstrated the potential of utilizing the underlying complex structure within scene-centric data; still, they commonly rely on hand-crafted objectness priors or specialized pretext tasks to build a learning framework, which may harm generalizability. Instead, we propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning. The semantic grouping is performed by assigning pixels to a set of learnable prototypes, which can adapt to each sample by attentive pooling over the feature and form new slots. Based on the learned data-dependent slots, a contrastive objective is employed for representation learning, which enhances the discriminability of features and in turn facilitates grouping semantically coherent pixels together. Compared with previous efforts, by simultaneously optimizing the two coupled objectives of semantic grouping and contrastive learning, our approach bypasses the disadvantages of hand-crafted priors and is able to learn object/group-level representations from scene-centric images. Experiments show our approach effectively decomposes complex scenes into semantic groups for feature learning and significantly benefits downstream tasks, including object detection, instance segmentation, and semantic segmentation. Code is available at: https://github.com/CVMI-Lab/SlotCon.

Comment: Accepted at NeurIPS 2022
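To make the mechanism described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of prototype-based grouping with attentive pooling and a slot-level contrastive loss. The class and function names (SlotGrouping, slot_contrastive_loss), hyperparameters, and loss formulation are illustrative assumptions and not the authors' implementation; refer to the linked repository for the actual SlotCon code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SlotGrouping(nn.Module):
        """Sketch: group dense features into slots via learnable prototypes."""
        def __init__(self, dim=256, num_prototypes=256, temperature=0.07):
            super().__init__()
            self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
            self.temperature = temperature

        def forward(self, feats):
            # feats: (B, C, H, W) dense features from a backbone + projector
            B, C, H, W = feats.shape
            feats = feats.flatten(2).transpose(1, 2)           # (B, HW, C)
            feats = F.normalize(feats, dim=-1)
            protos = F.normalize(self.prototypes, dim=-1)      # (K, C)
            logits = feats @ protos.t() / self.temperature     # (B, HW, K)
            assign = logits.softmax(dim=-1)                    # soft pixel-to-prototype assignment
            # Attentive pooling: each slot is the assignment-weighted mean of pixel features
            slots = torch.einsum('bnk,bnc->bkc', assign, feats)
            slots = slots / (assign.sum(dim=1, keepdim=True).transpose(1, 2) + 1e-6)
            return slots, assign

    def slot_contrastive_loss(slots_a, slots_b, temperature=0.1):
        """Toy InfoNCE over slots from two augmented views, matched by prototype index."""
        B, K, C = slots_a.shape
        a = F.normalize(slots_a.reshape(B * K, C), dim=-1)
        b = F.normalize(slots_b.reshape(B * K, C), dim=-1)
        logits = a @ b.t() / temperature                       # (BK, BK)
        targets = torch.arange(B * K, device=a.device)         # positive: same slot in the other view
        return F.cross_entropy(logits, targets)

In this toy version every prototype yields a slot for every image; the actual method additionally adapts prototypes per sample, discards slots with negligible pixel support, and optimizes the grouping and contrastive objectives jointly across augmented views, as summarized in the abstract.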

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1333774916
Document Type :
Electronic Resource