
Explicit3D: Graph network with spatial inference for single image 3D object detection.

Authors:
Liu, Yanjun
Yang, Wenming
Source:
Signal Processing: Image Communication, May 2024, Vol. 124.
Publication Year:
2024

Abstract

Indoor 3D object detection is an essential task in single-image scene understanding, fundamentally shaping spatial cognition in visual reasoning. Existing works on 3D object detection from a single image either predict each object independently or reason implicitly over all possible objects, failing to harness the relational geometric information between objects. To address this problem, we propose a sparse graph-based pipeline named Explicit3D built on object geometry and semantic features. Taking efficiency into consideration, we further define a relatedness score and design a novel dynamic pruning method via group sampling for sparse scene graph generation and updating. Furthermore, our Explicit3D introduces homogeneous matrices and defines new relative and corner losses to explicitly model the spatial difference between target pairs. Instead of using ground-truth labels as direct supervision, our relative and corner losses are derived from homogeneous transforms, which encourages the model to learn the geometric consistency between objects. Experimental results on the SUN RGB-D dataset demonstrate that our Explicit3D achieves a better performance balance than the state of the art.
• We propose a sparse graph-based pipeline for single-image 3D indoor scene detection.
• A novel relatedness score guides dynamic pruning for sparse scene graph generation.
• Homogeneous matrices explicitly represent relative spatial relationships (a sketch of this idea follows below).
• A novel relative loss boosts spatial offset prediction and overall object detection.
[ABSTRACT FROM AUTHOR]
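The following is a minimal sketch, not the authors' implementation, of the core geometric idea the abstract describes: representing each predicted 3D box pose as a 4x4 homogeneous matrix, forming the relative transform between an object pair, and supervising it against the ground-truth relative transform. It assumes a PyTorch setting with yaw-plus-translation box poses; all function and variable names here are hypothetical.

```python
import torch

def pose_to_homogeneous(yaw: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    """Build (N, 4, 4) homogeneous matrices from yaw angles (N,) and box centers (N, 3)."""
    n = yaw.shape[0]
    c, s = torch.cos(yaw), torch.sin(yaw)
    T = torch.zeros(n, 4, 4, dtype=center.dtype)
    # Rotation about the vertical axis, a common convention for indoor boxes.
    T[:, 0, 0], T[:, 0, 2] = c, s
    T[:, 1, 1] = 1.0
    T[:, 2, 0], T[:, 2, 2] = -s, c
    T[:, :3, 3] = center  # translation part
    T[:, 3, 3] = 1.0
    return T

def relative_transform(T_i: torch.Tensor, T_j: torch.Tensor) -> torch.Tensor:
    """Pose of object j expressed in object i's frame: T_i^{-1} @ T_j."""
    return torch.linalg.inv(T_i) @ T_j

def relative_loss(pred_T_i, pred_T_j, gt_T_i, gt_T_j) -> torch.Tensor:
    """Penalize the gap between predicted and ground-truth relative transforms,
    so supervision comes from pairwise geometry rather than per-object labels alone."""
    return torch.nn.functional.l1_loss(
        relative_transform(pred_T_i, pred_T_j),
        relative_transform(gt_T_i, gt_T_j),
    )
```

Deriving supervision from the pairwise transform rather than from absolute per-object labels is what ties each object's pose estimate to the geometry of its neighbors, which is the consistency property the abstract attributes to the relative and corner losses.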

Details

Language:
English
ISSN:
0923-5965
Volume:
124
Database:
Academic Search Index
Journal:
Signal Processing: Image Communication
Publication Type:
Academic Journal
Accession Number:
176630827
Full Text:
https://doi.org/10.1016/j.image.2024.117120