
Self-Supervised Image-to-Point Distillation via Semantically Tolerant Contrastive Loss

Authors: Mahmoud, Anas; Hu, Jordan S. K.; Kuai, Tianshu; Harakeh, Ali; Paull, Liam; Waslander, Steven L.
Publication Year: 2023

Abstract

An effective framework for learning 3D representations for perception tasks is distilling rich self-supervised image features via contrastive learning. However, image-to-point representation learning for autonomous driving datasets faces two main challenges: 1) the abundance of self-similarity, which results in the contrastive losses pushing away semantically similar point and image regions and thus disturbing the local semantic structure of the learned representations, and 2) severe class imbalance, as pretraining gets dominated by over-represented classes. We propose to alleviate the self-similarity problem through a novel semantically tolerant image-to-point contrastive loss that takes into consideration the semantic distance between positive and negative image regions to minimize contrasting semantically similar point and image regions. Additionally, we address class imbalance by designing a class-agnostic balanced loss that approximates the degree of class imbalance through an aggregate sample-to-samples semantic similarity measure. We demonstrate that our semantically tolerant contrastive loss with class balancing improves state-of-the-art 2D-to-3D representation learning in all evaluation settings on 3D semantic segmentation. Our method consistently outperforms state-of-the-art 2D-to-3D representation learning frameworks across a wide range of 2D self-supervised pretrained models.

Comment: Accepted in CVPR 2023
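
The abstract describes the two losses only in prose. As a rough illustration of the underlying ideas, the PyTorch-style sketch below shows one plausible way such losses could look. It is not the paper's formulation: the function names, the 1 - similarity tolerance weighting, and the similarity-density balance weights are assumptions introduced here purely for illustration.

    import torch
    import torch.nn.functional as F

    def semantically_tolerant_nce(point_feats, img_feats, temperature=0.07):
        # Illustrative sketch, NOT the paper's exact loss.
        # point_feats: (N, D) point-region features; img_feats: (N, D)
        # image-region features, where row i of img_feats is the positive
        # match for row i of point_feats.
        p = F.normalize(point_feats, dim=1)
        q = F.normalize(img_feats, dim=1)

        logits = p @ q.t() / temperature          # (N, N) pairwise similarities

        # Tolerance weights: negatives whose image regions look semantically
        # similar to the positive region contribute less to the denominator,
        # so self-similar content is not pushed apart as hard.
        with torch.no_grad():
            sem_sim = (q @ q.t()).clamp(min=0.0)  # crude semantic-similarity proxy
            neg_weight = 1.0 - sem_sim
            neg_weight.fill_diagonal_(1.0)        # positive term keeps full weight

        weighted_denom = (torch.exp(logits) * neg_weight).sum(dim=1)
        log_prob_pos = logits.diagonal() - torch.log(weighted_denom)
        return -log_prob_pos.mean()

    def class_agnostic_balance_weights(img_feats, alpha=1.0, eps=1e-6):
        # Illustrative sketch of class-agnostic balancing: use aggregate
        # sample-to-samples similarity as a class-frequency proxy. Samples
        # similar to many others likely belong to over-represented classes
        # and are down-weighted; rare-looking samples are up-weighted.
        q = F.normalize(img_feats, dim=1)
        density = (q @ q.t()).clamp(min=0.0).sum(dim=1) - 1.0  # exclude self
        weights = (1.0 / (density + eps)) ** alpha
        return weights * (weights.numel() / weights.sum())     # mean-1 normalize

In a full pipeline, per-sample balance weights of this kind would multiply the per-pair contrastive terms before averaging, so both mechanisms act on the same objective.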

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2301.05709
Document Type: Working Paper