
S2P3: Self-Supervised Polarimetric Pose Prediction

Authors:
Ruhkamp, Patrick
Gao, Daoyi
Navab, Nassir
Busam, Benjamin
Publication Year:
2023

Abstract

This paper proposes the first method for self-supervised 6D object pose prediction from multimodal RGB+polarimetric images. The novel training paradigm comprises 1) a physical model to extract geometric information from polarized light, 2) a teacher-student knowledge distillation scheme, and 3) a self-supervised loss formulation through differentiable rendering and an invertible physical constraint. Both networks leverage the physical properties of polarized light to learn robust geometric representations by encoding shape priors and polarization characteristics derived from our physical model. Geometric pseudo-labels from the teacher support the student network without the need for annotated real data. Dense appearance and geometric information of objects are obtained through a differentiable renderer with the predicted pose for self-supervised direct coupling. The student network additionally features our proposed invertible formulation of the physical shape priors, which enables end-to-end self-supervised training through physical constraints on derived polarization characteristics compared against the polarimetric input images. We specifically focus on photometrically challenging objects with texture-less or reflective surfaces and transparent materials, for which the most prominent performance gains are reported.

Comment: Accepted at IJCV
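The abstract does not spell out the physical model, but shape-from-polarization pipelines commonly start from the Stokes parameters of the captured polarizer stack, from which the degree of linear polarization (DoLP) and the angle of linear polarization (AoLP) are derived; Fresnel theory then relates DoLP to the zenith angle of the surface normal and AoLP to its azimuth. The following is a minimal sketch of that standard preprocessing step, assuming a four-angle division-of-focal-plane capture; the function name and NumPy implementation are illustrative and not taken from the paper.

```python
import numpy as np

def stokes_from_polarizer_stack(i0, i45, i90, i135, eps=1e-8):
    """Stokes parameters, DoLP and AoLP from four linear-polarizer images.

    i0 .. i135 are intensity images captured behind linear polarizers at
    0/45/90/135 degrees, e.g. from a division-of-focal-plane polarization camera.
    (Illustrative helper, not the authors' implementation.)
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                               # 0 deg vs. 90 deg component
    s2 = i45 - i135                             # 45 deg vs. 135 deg component
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)             # angle of linear polarization (radians)
    return s0, dolp, aolp

# Usage on a synthetic 4-channel polarimetric capture of size 480x640
rng = np.random.default_rng(0)
i0, i45, i90, i135 = rng.random((4, 480, 640))
s0, dolp, aolp = stokes_from_polarizer_stack(i0, i45, i90, i135)
```

An invertible formulation of such a model, as described in the abstract, would allow polarization characteristics to be re-synthesized from predicted geometry and compared against the polarimetric input images as a self-supervised constraint.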

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2312.01105
Document Type:
Working Paper