
Animated 3D human avatars from a single image with GAN-based texture inference.

Authors :
Li, Zhong
Chen, Lele
Liu, Celong
Zhang, Fuyao
Li, Zekun
Gao, Yu
Ha, Yuanzhou
Xu, Chenliang
Quan, Shuxue
Xu, Yi
Source :
Computers & Graphics. Apr 2021, Vol. 95, p81-91. 11p.
Publication Year :
2021

Abstract

Keywords: Computer graphics • Computer vision • Deep learning • 3D reconstruction • Virtual human digitization • Generative adversarial network

With the development of AR/VR technologies, a reliable and straightforward way to digitize a three-dimensional human body is in high demand. Most existing methods use complex equipment and sophisticated algorithms, but this is impractical for everyday users. In this paper, we propose a pipeline that reconstructs a 3D human shape avatar from a single image. Our approach simultaneously reconstructs the three-dimensional human geometry and whole-body texture map with only a single RGB image as input. We first segment the human body parts from the image and then obtain an initial body geometry by fitting the segmentation to a parametric model. Next, we warp the initial geometry to the final shape by utilizing a silhouette-based dense correspondence. Finally, to infer invisible back texture from a frontal image, we propose a network called InferGAN. Based on human semantic information, we also propose a method to handle partial occlusion by reconstructing the occluded body parts separately. Comprehensive experiments demonstrate that our solution is robust and effective on both public and our own datasets. Our human avatars can be easily rigged and animated using MoCap data. We have developed a mobile application that demonstrates this capability for AR applications. [ABSTRACT FROM AUTHOR]
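The warping step mentioned in the abstract relies on a silhouette-based dense correspondence between the initial and target body outlines. The paper's actual method is not reproduced here; as a rough, purely illustrative sketch, a nearest-neighbor correspondence between two 2D silhouette contours (using hypothetical helper names and toy circular contours in place of real silhouettes) could look like:

```python
import numpy as np

def circle_contour(n, radius, center=(0.0, 0.0)):
    """Sample n points on a circle, a stand-in for a silhouette contour."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.stack([center[0] + radius * np.cos(t),
                     center[1] + radius * np.sin(t)], axis=1)

def dense_correspondence(src, dst):
    """For every source contour point, find the closest target point.

    Returns an (n,) index array into dst. A real pipeline would use an
    ordering-aware matching along the contour; plain nearest neighbor
    is only a sketch of the idea.
    """
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def warp_toward(src, dst, idx, alpha=1.0):
    """Move each source point toward its matched target point."""
    return src + alpha * (dst[idx] - src)

src = circle_contour(64, radius=1.0)   # initial (fitted) silhouette
dst = circle_contour(64, radius=1.5)   # observed image silhouette
idx = dense_correspondence(src, dst)
warped = warp_toward(src, dst, idx)    # lies on the target contour
```

Here the correspondence drives a simple per-point displacement; in the paper, analogous 2D silhouette displacements guide the deformation of the 3D parametric body mesh.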

Details

Language :
English
ISSN :
0097-8493
Volume :
95
Database :
Academic Search Index
Journal :
Computers & Graphics
Publication Type :
Academic Journal
Accession number :
149510654
Full Text :
https://doi.org/10.1016/j.cag.2021.01.002