Learning 3-D Face Shape From Diverse Sources With Cross-Domain Face Synthesis
- Source :
- IEEE Multimedia; January 2023, Vol. 30, Issue 1, p. 7-16, 10 p.
- Publication Year :
- 2023
Abstract
- Monocular face reconstruction is a significant task in many multimedia applications. However, learning-based methods inherently suffer from the scarcity of large datasets annotated with 3-D ground truth. To tackle this problem, we propose a novel end-to-end 3-D face reconstruction network consisting of a domain-transfer conditional GAN (cGAN) and a face reconstruction network. Our method first uses the cGAN to translate realistic face images into a specific rendered style, with a novel 2-D facial edge consistency loss that lets the model exploit in-the-wild images. The domain-transferred images are then fed into a 3-D face reconstruction network. We further propose a novel reprojection consistency loss to constrain the 3-D face reconstruction network in a self-supervised manner. Our approach can be trained on annotated datasets, synthetic datasets, and in-the-wild images to learn a unified face model. Extensive experiments demonstrate the effectiveness of our method.
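As a rough illustration of the reprojection consistency idea mentioned in the abstract, the sketch below projects predicted 3-D landmarks back to the image plane and penalizes their distance to detected 2-D landmarks. It is a minimal PyTorch sketch under assumed choices (a weak-perspective camera and an L1 penalty); the function names `weak_perspective_project` and `reprojection_consistency_loss` are placeholders and do not come from the paper.

```python
# Hypothetical sketch of a reprojection consistency loss: predicted 3-D facial
# landmarks are projected to 2-D with an assumed weak-perspective camera and
# compared against detected 2-D landmarks (a self-supervised signal).
import torch
import torch.nn.functional as F

def weak_perspective_project(points_3d, scale, trans_2d):
    """Project (B, K, 3) points to (B, K, 2) with a weak-perspective camera."""
    return scale[:, None, None] * points_3d[..., :2] + trans_2d[:, None, :]

def reprojection_consistency_loss(pred_landmarks_3d, detected_landmarks_2d,
                                  scale, trans_2d):
    """L1 distance between projected 3-D landmarks and detected 2-D landmarks."""
    projected = weak_perspective_project(pred_landmarks_3d, scale, trans_2d)
    return F.l1_loss(projected, detected_landmarks_2d)

# Toy usage: batch of 2 faces with 68 landmarks each.
pred_3d = torch.randn(2, 68, 3, requires_grad=True)   # network predictions
det_2d  = torch.randn(2, 68, 2)                       # detected 2-D landmarks
scale   = torch.ones(2)                               # per-image camera scale
trans   = torch.zeros(2, 2)                           # per-image 2-D translation
loss = reprojection_consistency_loss(pred_3d, det_2d, scale, trans)
loss.backward()
```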
Details
- Language :
- English
- ISSN :
- 1070-986X
- Volume :
- 30
- Issue :
- 1
- Database :
- Supplemental Index
- Journal :
- IEEE Multimedia
- Publication Type :
- Periodical
- Accession number :
- ejs63069237
- Full Text :
- https://doi.org/10.1109/MMUL.2022.3195091