
Permutation-Invariant Relational Network for Multi-person 3D Pose Estimation

Authors :
Ugrinovic, Nicolas
Ruiz, Adria
Agudo, Antonio
Sanfeliu, Alberto
Moreno-Noguer, Francesc
Publication Year :
2022
Publisher :
arXiv, 2022.

Abstract

The recovery of multi-person 3D poses from a single RGB image is a severely ill-conditioned problem due to the inherent 2D-3D depth ambiguity, inter-person occlusions, and body truncations. To tackle these issues, recent works have shown promising results by simultaneously reasoning about multiple people. However, in most cases this reasoning is limited to pairwise person interactions, thus hindering a holistic scene representation able to capture long-range interactions. Approaches that jointly process all people in the scene address this limitation, but they require designating one individual as a reference and assuming a pre-defined person ordering, and they are sensitive to these choices. In this paper, we overcome both limitations and propose an approach for multi-person 3D pose estimation that captures long-range interactions independently of the input order. For this purpose, we build a residual-like permutation-invariant network that refines potentially corrupted initial 3D poses estimated by an off-the-shelf detector. The residual function is learned via Set Transformer blocks, which model the interactions among all initial poses regardless of their ordering or number. A thorough evaluation demonstrates that our approach boosts the performance of the initially estimated 3D poses by large margins, achieving state-of-the-art results on standardized benchmarks. Moreover, the proposed module is computationally efficient and can potentially be used as a drop-in complement for any 3D pose detector in multi-person scenes.
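To make the idea concrete, below is a minimal PyTorch sketch (not the authors' code) of a permutation-invariant residual refinement module of the kind the abstract describes. It treats the N initial 3D poses in a scene as an unordered set, mixes them with multi-head self-attention (the core operation of a Set Transformer block), and predicts a per-person correction that is added back to the input poses. All layer sizes, the 14-joint skeleton, and the use of nn.MultiheadAttention in place of the paper's exact Set Transformer implementation are illustrative assumptions.

```python
# Minimal sketch of a permutation-invariant residual pose refiner.
# Assumptions (not from the paper): 14 joints, 256-dim embeddings,
# 2 attention blocks, standard nn.MultiheadAttention.
import torch
import torch.nn as nn


class PoseSetRefiner(nn.Module):
    def __init__(self, num_joints=14, dim=256, heads=4, blocks=2):
        super().__init__()
        in_dim = num_joints * 3                      # flattened (x, y, z) per joint
        self.embed = nn.Linear(in_dim, dim)
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(blocks)]
        )
        self.ff = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(blocks)]
        )
        self.norm1 = nn.ModuleList([nn.LayerNorm(dim) for _ in range(blocks)])
        self.norm2 = nn.ModuleList([nn.LayerNorm(dim) for _ in range(blocks)])
        self.out = nn.Linear(dim, in_dim)            # predicts the residual correction

    def forward(self, poses):
        # poses: (B, N, num_joints, 3) -- N people per image, any order, any N.
        B, N, J, _ = poses.shape
        x = self.embed(poses.reshape(B, N, J * 3))
        for attn, ff, n1, n2 in zip(self.attn, self.ff, self.norm1, self.norm2):
            a, _ = attn(x, x, x)                     # every person attends to every other
            x = n1(x + a)
            x = n2(x + ff(x))
        delta = self.out(x).reshape(B, N, J, 3)      # per-person residual
        return poses + delta                         # refined 3D poses


if __name__ == "__main__":
    initial = torch.randn(1, 5, 14, 3)               # 5 noisy detector poses
    refiner = PoseSetRefiner().eval()
    with torch.no_grad():
        refined = refiner(initial)
        # Permutation equivariance: shuffling people permutes the output the same way,
        # so the refinement does not depend on any fixed person ordering.
        perm = torch.randperm(5)
        assert torch.allclose(refiner(initial[:, perm]), refined[:, perm], atol=1e-4)
    print(refined.shape)                             # torch.Size([1, 5, 14, 3])
```

Because no positional encoding is attached to the people, the attention blocks are equivariant to permutations of the input set, which is what lets such a module handle an arbitrary number of detected people in any order.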

Details

Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....3f5b19a548467085931db831be00d328
Full Text :
https://doi.org/10.48550/arxiv.2204.04913