
Representation-Agnostic Shape Fields

Authors :
Huang, Xiaoyang
Yang, Jiancheng
Wang, Yanjun
Chen, Ziyu
Li, Linguo
Li, Teng
Ni, Bingbing
Zhang, Wenjun
Source :
The Tenth International Conference on Learning Representations (ICLR 2022)
Publication Year :
2022

Abstract

3D shape analysis has been widely explored in the era of deep learning. Numerous models have been developed for various 3D data representation formats, e.g., MeshCNN for meshes, PointNet for point clouds and VoxNet for voxels. In this study, we present Representation-Agnostic Shape Fields (RASF), a generalizable and computation-efficient shape embedding module for 3D deep learning. RASF is implemented as a learnable 3D grid with multiple channels that stores local geometry. Based on RASF, shape embeddings for various 3D shape representations (point clouds, meshes and voxels) are retrieved by coordinate indexing. While there are multiple ways to optimize the learnable parameters of RASF, we present two effective schemes for RASF pre-training: shape reconstruction and normal estimation. Once trained, RASF becomes a plug-and-play performance booster with negligible cost. Extensive experiments on diverse 3D representation formats, networks and applications validate the universal effectiveness of the proposed RASF. Code and pre-trained models are publicly available at https://github.com/seanywang0408/RASF
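The core mechanism the abstract describes, retrieving per-point embeddings from a learnable multi-channel 3D grid by coordinate indexing, can be sketched as trilinear interpolation into a feature volume. The sketch below is illustrative only: the function name, shapes, and the use of trilinear lookup are assumptions, not the authors' exact implementation.

```python
import numpy as np

def rasf_lookup(grid, points):
    """Sketch of RASF-style coordinate indexing (hypothetical API).

    grid:   (C, R, R, R) array of learnable per-cell features
    points: (N, 3) coordinates normalized to [0, 1]^3
    returns (N, C) per-point shape embeddings via trilinear interpolation
    """
    C, R = grid.shape[0], grid.shape[1]
    # Map normalized [0, 1] coordinates onto continuous grid indices.
    idx = np.clip(points, 0.0, 1.0) * (R - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, R - 1)
    frac = idx - lo
    out = np.zeros((points.shape[0], C))
    # Accumulate the 8 corner contributions of each enclosing cell.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                ix = np.where(dx, hi[:, 0], lo[:, 0])
                iy = np.where(dy, hi[:, 1], lo[:, 1])
                iz = np.where(dz, hi[:, 2], lo[:, 2])
                w = (np.where(dx, frac[:, 0], 1 - frac[:, 0])
                     * np.where(dy, frac[:, 1], 1 - frac[:, 1])
                     * np.where(dz, frac[:, 2], 1 - frac[:, 2]))
                out += w[:, None] * grid[:, ix, iy, iz].T
    return out

rng = np.random.default_rng(0)
grid = rng.standard_normal((8, 16, 16, 16))   # 8-channel, 16^3 learnable grid
pts = rng.random((100, 3))                    # 100 query points in [0, 1]^3
emb = rasf_lookup(grid, pts)
print(emb.shape)  # (100, 8)
```

Because the lookup is just interpolation into a small grid, it is differentiable with respect to the grid values, which is what would allow such a module to be pre-trained (e.g., via shape reconstruction or normal estimation) and then reused across point-cloud, mesh, and voxel backbones at negligible cost.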

Details

Database :
arXiv
Journal :
The Tenth International Conference on Learning Representations (ICLR 2022)
Publication Type :
Report
Accession number :
edsarx.2203.10259
Document Type :
Working Paper