
Neural Groundplans: Persistent Neural Scene Representations from a Single Image

Authors :
Sharma, Prafull
Tewari, Ayush
Du, Yilun
Zakharov, Sergey
Ambrus, Rares
Gaidon, Adrien
Freeman, William T.
Durand, Fredo
Tenenbaum, Joshua B.
Sitzmann, Vincent
Publication Year : 2022

Abstract

We present a method to map 2D image observations of a scene to a persistent 3D scene representation, enabling novel view synthesis and disentangled representation of the movable and immovable components of the scene. Motivated by the bird's-eye-view (BEV) representation commonly used in vision and robotics, we propose conditional neural groundplans, ground-aligned 2D feature grids, as persistent and memory-efficient scene representations. Our method is trained with self-supervision from unlabeled multi-view observations using differentiable rendering, and learns to complete the geometry and appearance of occluded regions. In addition, we show that we can leverage multi-view videos at training time to learn to separately reconstruct static and movable components of the scene from a single image at test time. The ability to separately reconstruct movable objects enables a variety of downstream tasks using simple heuristics, such as extraction of object-centric 3D representations, novel view synthesis, instance-level segmentation, 3D bounding box prediction, and scene editing. This highlights the value of neural groundplans as a backbone for efficient 3D scene understanding models.

Comment: Project page: https://prafullsharma.net/neural_groundplans
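The core operation implied by the abstract, querying a ground-aligned 2D feature grid at 3D points, can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' implementation: the function name query_groundplan, the metric extent of the grid, and the feature dimensions are all assumptions. It projects each 3D sample point onto the ground plane, bilinearly interpolates the feature grid at that location, and appends the height coordinate, so that a downstream decoder (e.g., an MLP used inside a differentiable renderer) could predict density and color from the result.

# Minimal sketch (assumed names and shapes, not the paper's code):
# query a ground-aligned 2D feature grid at 3D world-space points.
import torch
import torch.nn.functional as F

def query_groundplan(grid: torch.Tensor, points: torch.Tensor,
                     extent: float = 10.0) -> torch.Tensor:
    """grid: (C, H, W) feature plane covering [-extent, extent]^2 on the ground.
    points: (N, 3) world-space query points, with y as the up axis.
    Returns (N, C + 1): bilinear sample at (x, z) plus the height coordinate."""
    # Normalize ground-plane coordinates (x, z) to grid_sample's [-1, 1] range.
    xz = points[:, [0, 2]] / extent                      # (N, 2)
    sample_coords = xz.view(1, 1, -1, 2)                 # (1, 1, N, 2)
    feats = F.grid_sample(grid.unsqueeze(0), sample_coords,
                          mode='bilinear', align_corners=True)  # (1, C, 1, N)
    feats = feats.squeeze(0).squeeze(1).t()              # (N, C)
    height = points[:, 1:2]                              # keep the up-axis value
    return torch.cat([feats, height], dim=-1)            # (N, C + 1)

# Usage: sample features for points along a camera ray, as a renderer would.
grid = torch.randn(64, 128, 128)                         # hypothetical groundplan
ray_points = torch.rand(32, 3) * 4.0 - 2.0               # 32 samples in a 4 m cube
features = query_groundplan(grid, ray_points)
print(features.shape)                                    # torch.Size([32, 65])

One appeal of this layout, consistent with the abstract's claim of memory efficiency, is that storage grows with ground area rather than with scene volume, which is what makes the groundplan practical as a persistent scene representation.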

Details

Database : OAIster
Publication Type : Electronic Resource
Accession number : edsoai.on1381555956
Document Type : Electronic Resource