
Pixel2Mesh: 3D Mesh Model Generation via Image Guided Deformation.

Authors :
Wang, Nanyang
Zhang, Yinda
Li, Zhuwen
Fu, Yanwei
Yu, Hang
Liu, Wei
Xue, Xiangyang
Jiang, Yu-Gang
Source :
IEEE Transactions on Pattern Analysis & Machine Intelligence; Oct 2021, Vol. 43 Issue 10, p3600-3613, 14p
Publication Year :
2021

Abstract

In this paper, we propose an end-to-end deep learning architecture that generates 3D triangular meshes from single color images. Restricted by the nature of prevalent deep learning techniques, most previous works represent 3D shapes as volumes or point clouds; however, converting these representations into compact, ready-to-use mesh models is non-trivial. Unlike existing methods, our network represents 3D shapes as meshes, which are essentially graphs and thus well suited for graph-based convolutional neural networks. Leveraging perceptual features extracted from the input image, our network produces the correct geometry by progressively deforming an ellipsoid. To keep the whole deformation procedure stable, we adopt a coarse-to-fine strategy and define several mesh/surface-related losses that capture different shape properties, which helps produce visually appealing and physically accurate 3D geometry. In addition, our model can naturally be adapted to objects in specific domains, e.g., human faces, and easily extended to learn per-vertex properties, e.g., color. Extensive experiments show that our method not only qualitatively produces mesh models with better details, but also achieves higher 3D shape estimation accuracy than the state of the art. [ABSTRACT FROM AUTHOR]
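
The abstract describes the core mechanism: the vertices of an initial ellipsoid mesh are treated as graph nodes, graph convolutions propagate image-conditioned features along mesh edges, and predicted per-vertex offsets progressively deform the shape. The sketch below is not the authors' implementation; it is a minimal illustration of that idea, assuming PyTorch, hypothetical module names (GraphConv, DeformBlock), and random tensors standing in for the perceptual features the paper pools from a CNN.

```python
# Minimal sketch (not the authors' code) of graph-convolution-based mesh
# deformation, as described at a high level in the abstract. All names and
# feature sizes here are illustrative assumptions.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One graph-convolution layer: combine each vertex's own features with
    the mean of its neighbors' features (neighbors given by an adjacency)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x:   (V, in_dim) per-vertex features
        # adj: (V, V) dense 0/1 adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = (adj @ x) / deg                      # mean over neighbors
        return torch.relu(self.w_self(x) + self.w_neigh(neigh))


class DeformBlock(nn.Module):
    """Predict a 3D offset per vertex from graph-convolved features, as a
    stand-in for one coarse-to-fine deformation stage."""

    def __init__(self, feat_dim, hidden=128):
        super().__init__()
        self.gc1 = GraphConv(feat_dim + 3, hidden)   # +3 for vertex coordinates
        self.gc2 = GraphConv(hidden, hidden)
        self.offset = nn.Linear(hidden, 3)

    def forward(self, verts, vert_feats, adj):
        h = torch.cat([verts, vert_feats], dim=1)
        h = self.gc2(self.gc1(h, adj), adj)
        return verts + self.offset(h)                # deformed vertex positions


# Toy usage: 4 vertices on a fully connected toy graph, random "perceptual" features.
V, F = 4, 16
verts = torch.randn(V, 3)
feats = torch.randn(V, F)                            # placeholder for pooled image features
adj = torch.ones(V, V) - torch.eye(V)
deformed = DeformBlock(F)(verts, feats, adj)
print(deformed.shape)                                # torch.Size([4, 3])
```

In the paper this kind of block would be applied in a coarse-to-fine cascade, with the mesh refined between stages; a single stage on a toy graph is shown here only to make the data flow concrete.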

Details

Language :
English
ISSN :
0162-8828
Volume :
43
Issue :
10
Database :
Complementary Index
Journal :
IEEE Transactions on Pattern Analysis & Machine Intelligence
Publication Type :
Academic Journal
Accession number :
153376790
Full Text :
https://doi.org/10.1109/TPAMI.2020.2984232