High-Fidelity Monocular Face Reconstruction Based on an Unsupervised Model-Based Face Autoencoder.
- Source:
- IEEE Transactions on Pattern Analysis & Machine Intelligence; Feb 2020, Vol. 42 Issue 2, p357-370, 14p
- Publication Year:
- 2020
Abstract
- In this work, we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image. To this end, we combine a convolutional encoder network with an expert-designed generative model that serves as the decoder. The core innovation is the differentiable parametric decoder, which encapsulates image formation analytically based on a generative model. Our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose, shape, expression, skin reflectance, and scene illumination. Thanks to this new way of combining CNN-based and model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. For the first time, a CNN encoder and an expert-designed generative model can be trained end-to-end in an unsupervised manner, which renders training on very large (unlabeled) real-world datasets feasible. The obtained reconstructions compare favorably to current state-of-the-art approaches in terms of quality and richness of representation. This work is an extended version of , where we additionally present a stochastic vertex sampling technique for faster training of our networks; moreover, we propose and evaluate analysis-by-synthesis and shape-from-shading refinement approaches to achieve a high-fidelity reconstruction. [ABSTRACT FROM AUTHOR]
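The abstract describes an encoder that regresses a semantically structured code vector and a differentiable parametric decoder that regenerates the observation, trained self-supervised on a stochastic subset of vertices. The toy sketch below illustrates only that structure; it is a hedged illustration, not the authors' implementation. The dimensions, random linear bases, and the `decode` function are all invented here, and the paper's real decoder models full image formation (pose, skin reflectance, illumination, and rendering), not just geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (the real model is far larger).
N_VERTS = 50                 # vertices in the toy face mesh
DIM_SHAPE, DIM_EXPR = 8, 4   # identity- and expression-coefficient counts

# Linear morphable-model-style bases, randomly generated for illustration.
mean_shape = rng.normal(size=(N_VERTS * 3,))
shape_basis = rng.normal(size=(N_VERTS * 3, DIM_SHAPE))
expr_basis = rng.normal(size=(N_VERTS * 3, DIM_EXPR))

def decode(code):
    """Toy differentiable parametric decoder.

    The code vector has fixed semantic slots, mirroring the paper's idea
    that each parameter group (shape, expression, ...) has an exactly
    defined semantic meaning.
    """
    alpha = code[:DIM_SHAPE]                       # identity shape coefficients
    delta = code[DIM_SHAPE:DIM_SHAPE + DIM_EXPR]   # expression coefficients
    verts = mean_shape + shape_basis @ alpha + expr_basis @ delta
    return verts.reshape(N_VERTS, 3)

def self_supervised_loss(code, target_verts, sample_idx):
    """Mean squared error on a random vertex subset: a stand-in for the
    photometric loss with stochastic vertex sampling used for faster training."""
    diff = decode(code)[sample_idx] - target_verts[sample_idx]
    return float(np.mean(diff ** 2))

# In the full system an encoder CNN would predict `code` from an image;
# here we just draw a random code and a random "ground truth" code.
true_code = rng.normal(size=(DIM_SHAPE + DIM_EXPR,))
target = decode(true_code)
code = rng.normal(size=(DIM_SHAPE + DIM_EXPR,))
idx = rng.choice(N_VERTS, size=10, replace=False)  # stochastically sampled vertices
loss = self_supervised_loss(code, target, idx)
```

Because the decoder is an explicit, differentiable function of the code, the loss gradient can flow through it back into the encoder, which is what makes the end-to-end unsupervised training described in the abstract possible.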
- Subjects:
- SAMPLING (Process)
- IMAGE reconstruction

Details
- Language:
- English
- ISSN:
- 0162-8828
- Volume:
- 42
- Issue:
- 2
- Database:
- Complementary Index
- Journal:
- IEEE Transactions on Pattern Analysis & Machine Intelligence
- Publication Type:
- Academic Journal
- Accession number:
- 141230579
- Full Text:
- https://doi.org/10.1109/TPAMI.2018.2876842