
Mesh deformation-based single-view 3D reconstruction of thin eyeglasses frames with differentiable rendering

Authors:
Zhang, Fan
Ji, Ziyue
Kang, Weiguang
Li, Weiqing
Su, Zhiyong
Source:
Graphical Models, Volume 135, October 2024, 101225
Publication Year:
2024

Abstract

With the support of Virtual Reality (VR) and Augmented Reality (AR) technologies, 3D virtual eyeglasses try-on is fast becoming a popular application, offering users a "try on" option for selecting the perfect pair of eyeglasses from the comfort of their own home. Reconstructing eyeglasses frames from a single image with traditional depth- and image-based methods is extremely difficult due to their unique characteristics: a lack of sufficient texture features, thin elements, and severe self-occlusions. In this paper, we propose the first mesh deformation-based reconstruction framework for recovering high-precision 3D full-frame eyeglasses models from a single RGB image, leveraging prior and domain-specific knowledge. Specifically, after constructing a synthetic eyeglasses frame dataset, we first define a class-specific eyeglasses frame template with predefined keypoints. Then, given an input eyeglasses frame image with thin structures and few texture features, we design a keypoint detector and refiner that locates the predefined keypoints in a coarse-to-fine manner to estimate the camera pose accurately. After that, using differentiable rendering, we propose a novel optimization approach that produces correct geometry by progressively applying free-form deformation (FFD) to the template mesh. We define a series of loss functions to enforce consistency between the rendered result and the corresponding RGB input, drawing on constraints from the inherent structure, silhouettes, keypoints, and per-pixel shading information. Experimental results on both the synthetic dataset and real images demonstrate the effectiveness of the proposed algorithm.
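The core deformation step mentioned in the abstract can be sketched as a generic trilinear Bezier free-form deformation (FFD): each mesh vertex, expressed in lattice coordinates (s, t, u) in [0, 1]^3, is mapped to a weighted sum of control points using Bernstein basis functions. This is a minimal pure-Python sketch of standard FFD, not the authors' implementation; the lattice resolution and the differentiable-rendering optimization loop that moves the control points are assumptions not specified in the abstract.

```python
import math

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t) = C(n,i) * t^i * (1-t)^(n-i)."""
    return math.comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(point, lattice, l, m, n):
    """Deform a point with lattice coordinates (s, t, u) in [0,1]^3 using a
    trilinear Bezier FFD lattice of (l+1) x (m+1) x (n+1) control points.
    lattice[i][j][k] holds the (x, y, z) position of control point P_ijk."""
    s, t, u = point
    x = y = z = 0.0
    for i in range(l + 1):
        bi = bernstein(l, i, s)
        for j in range(m + 1):
            bj = bernstein(m, j, t)
            for k in range(n + 1):
                w = bi * bj * bernstein(n, k, u)
                px, py, pz = lattice[i][j][k]
                x += w * px
                y += w * py
                z += w * pz
    return (x, y, z)

# An undeformed lattice places control point P_ijk at (i/l, j/m, k/n);
# by the linear precision of the Bernstein basis, this FFD is the identity.
l = m = n = 2
identity_lattice = [[[(i / l, j / m, k / n) for k in range(n + 1)]
                     for j in range(m + 1)] for i in range(l + 1)]
print(ffd((0.25, 0.5, 0.75), identity_lattice, l, m, n))
```

In an optimization setting like the one described, the control point positions would be the free variables: a differentiable renderer draws the deformed template, the silhouette/keypoint/shading losses are evaluated against the input image, and gradients flow back through this weighted sum to the lattice.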

Details

Database:
arXiv
Journal:
Graphical Models, Volume 135, October 2024, 101225
Publication Type:
Report
Accession number:
edsarx.2408.05402
Document Type:
Working Paper
Full Text:
https://doi.org/10.1016/j.gmod.2024.101225