
Variational Relational Point Completion Network

Authors :
Pan, Liang
Chen, Xinyi
Cai, Zhongang
Zhang, Junzhe
Zhao, Haiyu
Yi, Shuai
Liu, Ziwei
Publication Year :
2021

Abstract

Real-scanned point clouds are often incomplete due to viewpoint, occlusion, and noise. Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details. Furthermore, they mostly learn a deterministic partial-to-complete mapping, but overlook structural relations in man-made objects. To tackle these challenges, this paper proposes a variational framework, Variational Relational point Completion network (VRCNet), with two appealing properties: 1) Probabilistic Modeling. In particular, we propose a dual-path architecture to enable principled probabilistic modeling across partial and complete clouds. One path consumes complete point clouds for reconstruction by learning a point VAE. The other path generates complete shapes for partial point clouds, whose embedded distribution is guided by the distribution obtained from the reconstruction path during training. 2) Relational Enhancement. Specifically, we carefully design a point self-attention kernel and a point selective kernel module to exploit relational point features, which refine local shape details conditioned on the coarse completion. In addition, we contribute a multi-view partial point cloud dataset (MVP dataset) containing over 100,000 high-quality scans, which renders partial 3D shapes from 26 uniformly distributed camera poses for each 3D CAD model. Extensive experiments demonstrate that VRCNet outperforms state-of-the-art methods on all standard point cloud completion benchmarks. Notably, VRCNet shows great generalizability and robustness on real-world point cloud scans.

Comment: 15 pages, 13 figures, accepted to CVPR 2021 (Oral), project webpage: https://paul007pl.github.io/projects/VRCNet.html
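To make the dual-path guidance concrete, here is a minimal PyTorch sketch under stated assumptions: a shared PointNet-style backbone, two heads predicting diagonal-Gaussian latent distributions (one for complete clouds, one for partial clouds), and a KL term pulling the completion path's distribution toward the reconstruction path's. All names (DualPathEncoder, rec_head, comp_head) are hypothetical and the decoders are omitted; this is an illustration of the idea from the abstract, not the authors' released implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: class and variable names are ours, not from VRCNet's code.
class DualPathEncoder(nn.Module):
    def __init__(self, feat_dim=1024, latent_dim=128):
        super().__init__()
        # Shared PointNet-style per-point encoder: (B, 3, N) -> (B, feat_dim, N)
        self.backbone = nn.Sequential(
            nn.Conv1d(3, 256, 1), nn.ReLU(),
            nn.Conv1d(256, feat_dim, 1),
        )
        # Separate heads predict diagonal-Gaussian parameters for each path.
        self.rec_head = nn.Linear(feat_dim, 2 * latent_dim)   # complete clouds
        self.comp_head = nn.Linear(feat_dim, 2 * latent_dim)  # partial clouds

    def encode(self, pts, head):
        feat = self.backbone(pts).max(dim=2).values  # global max pooling
        mu, logvar = head(feat).chunk(2, dim=1)
        return mu, logvar

    def forward(self, partial, complete):
        mu_r, logvar_r = self.encode(complete, self.rec_head)
        mu_c, logvar_c = self.encode(partial, self.comp_head)
        # KL(q_completion || q_reconstruction) between diagonal Gaussians:
        # guides the partial cloud's latent toward the complete cloud's.
        kl = 0.5 * (
            logvar_r - logvar_c
            + (logvar_c.exp() + (mu_c - mu_r).pow(2)) / logvar_r.exp()
            - 1.0
        ).sum(dim=1).mean()
        # Reparameterized sample from the completion path would feed a decoder
        # (omitted here) that produces the coarse complete shape.
        z = mu_c + torch.randn_like(mu_c) * (0.5 * logvar_c).exp()
        return z, kl
```

Since the abstract says the reconstruction path guides the completion path "during training", a natural reading is that only the partial-cloud path runs at inference time.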

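For the relational enhancement, a generic self-attention layer over per-point features conveys the flavor of the point self-attention kernel. The sketch below assumes standard multi-head attention with a residual connection; it is an illustration of attention over point features, not the paper's exact kernel or selective kernel module.

```python
import torch
import torch.nn as nn

# Generic illustration; VRCNet's actual kernels may differ in detail.
class PointSelfAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):
        # feats: (B, N, dim) per-point features. Each point attends to all
        # others, so local refinement can borrow structure from related
        # regions of the coarse completion (e.g. symmetric parts).
        out, _ = self.attn(feats, feats, feats)
        return self.norm(feats + out)  # residual + normalization

# Usage: refined = PointSelfAttention()(torch.randn(2, 2048, 256))
```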
Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2104.10154
Document Type :
Working Paper