
DeepHuman: 3D Human Reconstruction from a Single Image

Authors:
Zheng, Zerong
Yu, Tao
Wei, Yixuan
Dai, Qionghai
Liu, Yebin
Publication Year:
2019

Abstract

We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image. To reduce the ambiguities in surface geometry reconstruction, including the reconstruction of invisible areas, we propose and leverage a dense semantic representation generated from the SMPL model as an additional input. A key feature of our network is that it fuses image features at different scales into 3D space through a volumetric feature transformation, which helps recover accurate surface geometry. Visible surface details are further refined by a normal refinement network, which can be concatenated with the volume generation network using our proposed volumetric normal projection layer. We also contribute THuman, a real-world 3D human model dataset containing about 7,000 models; the network is trained on data generated from this dataset. Owing to the specific design of our network and the diversity of our dataset, our method estimates a 3D human model from only a single image and outperforms state-of-the-art approaches.
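The volumetric feature transformation, the lifting of 2D image features into the 3D volume, can be illustrated with a minimal PyTorch-style sketch. This is not the authors' released code: the class name, tensor shapes, and the orthographic-projection assumption (each pixel's feature is replicated along the depth axis to every voxel projecting to that pixel) are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

class VolumetricFeatureTransform(nn.Module):
    """Lift a 2D feature map (B, C, H, W) into a 3D feature volume
    (B, C, D, H, W) by replicating each pixel's feature along the
    depth axis. A sketch assuming orthographic projection; the
    paper's exact transform may differ.
    """
    def __init__(self, depth: int):
        super().__init__()
        self.depth = depth  # number of voxels along the viewing axis

    def forward(self, feat2d: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat2d.shape
        # (B, C, 1, H, W) -> (B, C, D, H, W): broadcast along depth
        return feat2d.unsqueeze(2).expand(b, c, self.depth, h, w)

if __name__ == "__main__":
    vft = VolumetricFeatureTransform(depth=32)
    img_feat = torch.randn(1, 64, 32, 32)  # hypothetical encoder output
    vol_feat = vft(img_feat)               # features fused into 3D space
    print(vol_feat.shape)                  # torch.Size([1, 64, 32, 32, 32])
```

Per the abstract, this fusion is applied to image features at several scales, so both coarse context and fine detail reach the volumetric decoder; the single-scale version above shows only the lifting step itself.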

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.1903.06473
Document Type:
Working Paper