
HACK: Learning a Parametric Head and Neck Model for High-fidelity Animation.

Authors :
Zhang, Longwen
Zhao, Zijun
Cong, Xinzhou
Zhang, Qixuan
Gu, Shuqi
Gao, Yuchong
Zheng, Rui
Yang, Wei
Xu, Lan
Yu, Jingyi
Source :
ACM Transactions on Graphics; Aug2023, Vol. 42 Issue 4, p1-20, 20p
Publication Year :
2023

Abstract

Significant advancements have been made in developing parametric models for digital humans, with various approaches concentrating on parts such as the human body, hand, or face. Nevertheless, connectors such as the neck have been overlooked in these models, leaving rich anatomical priors largely unused. In this paper, we introduce HACK (Head-And-neCK), a novel parametric model for constructing the head and cervical region of digital humans. Our model seeks to disentangle the full spectrum of neck and larynx motions, facial expressions, and appearance variations, providing personalized and anatomically consistent controls, particularly for the neck regions. To build our HACK model, we acquire a comprehensive multi-modal dataset of the head and neck under various facial expressions. We employ a 3D ultrasound imaging scheme to extract the inner biomechanical structures, namely the precise 3D rotation information of the seven vertebrae of the cervical spine. We then adopt a multi-view photometric approach to capture the geometry and physically-based textures of diverse subjects, who exhibit a wide range of static expressions as well as sequential head-and-neck movements. Using the multi-modal dataset, we train the parametric HACK model by separating the 3D head and neck depiction into various shape, pose, expression, and larynx blendshapes from the neutral expression and the rest skeletal pose. We adopt an anatomically-consistent skeletal design for the cervical region, and the expression is linked to facial action units for artist-friendly controls. We also propose to optimize the mapping from the identity shape space to the PCA spaces of personalized blendshapes to augment the pose and expression blendshapes, providing personalized properties within the framework of the generic model.
Furthermore, we use larynx blendshapes to accurately control the larynx deformation and constrain the larynx sliding motions along the vertical direction in the UV-space for precise modeling of the larynx beneath the neck skin. HACK addresses the head and neck as a unified entity, offering more accurate and expressive controls, with a new level of realism, particularly for the neck regions. This approach has significant benefits for numerous applications, including geometric fitting and animation, and enables inter-correlation analysis between head and neck for fine-grained motion synthesis and transfer. [ABSTRACT FROM AUTHOR]
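The decomposition described above follows the standard linear blendshape formulation used by parametric models of this kind: a deformed mesh is the neutral template plus coefficient-weighted basis offsets. The sketch below illustrates that generic formulation only, with made-up tiny dimensions and random bases; the actual HACK model additionally includes pose-dependent and larynx blendshapes, skeletal skinning, and bases learned from the captured dataset.

```python
import numpy as np

# Hypothetical, tiny dimensions for illustration (not HACK's actual sizes).
N_VERTS = 5   # vertices in the template mesh
N_SHAPE = 3   # identity (shape) coefficients
N_EXPR = 2    # expression coefficients (action-unit-linked in HACK)

rng = np.random.default_rng(0)

# Neutral template mesh, flattened to (3 * N_VERTS,).
template = rng.standard_normal(3 * N_VERTS)

# Linear blendshape bases: each column is one basis offset field.
B_shape = rng.standard_normal((3 * N_VERTS, N_SHAPE))
B_expr = rng.standard_normal((3 * N_VERTS, N_EXPR))

def blendshape_model(beta, psi):
    """Deformed vertices = template + linear shape and expression offsets."""
    return template + B_shape @ beta + B_expr @ psi

# Zero coefficients recover the neutral template exactly.
neutral = blendshape_model(np.zeros(N_SHAPE), np.zeros(N_EXPR))
assert np.allclose(neutral, template)
```

In this formulation, "disentangling" shape, pose, and expression amounts to learning separate bases so that each coefficient group controls one factor of variation independently.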

Details

Language :
English
ISSN :
0730-0301
Volume :
42
Issue :
4
Database :
Complementary Index
Journal :
ACM Transactions on Graphics
Publication Type :
Academic Journal
Accession number :
167303948
Full Text :
https://doi.org/10.1145/3592093