1. Generating Multiple 4D Expression Transitions by Learning Face Landmark Trajectories.
- Author
- Otberdout, Naima, Ferrari, Claudio, Daoudi, Mohamed, Berretti, Stefano, and Del Bimbo, Alberto
- Abstract
In this article, we address the problem of 4D facial expression generation. This is usually addressed by animating a neutral 3D face to an expression peak and then returning to the neutral state. In the real world, though, people show more complex expressions and switch from one expression to another. We thus propose a new model that generates transitions between different expressions and synthesizes long, composed 4D expressions. This involves three sub-problems: (1) modeling the temporal dynamics of expressions, (2) learning transitions between them, and (3) deforming a generic mesh. We propose to encode the temporal evolution of expressions by the motion of a set of 3D landmarks, which we learn to generate by training a manifold-valued GAN (Motion3DGAN). To allow the generation of composed expressions, this model accepts two labels encoding the starting and ending expressions. The final sequence of meshes is generated by a Sparse2Dense mesh Decoder (S2D-Dec) that maps the landmark displacements to dense, per-vertex displacements of a known mesh topology. By working explicitly with motion trajectories, the model is fully independent of identity. Extensive experiments on five public datasets show that the proposed approach brings significant improvements over previous solutions, while retaining good generalization to unseen data.
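To make the two-stage pipeline described in the abstract concrete, the minimal sketch below shows the conditioning interface: a generator that maps a noise vector plus two expression labels (start and end) to a landmark trajectory, followed by a sparse-to-dense decoder that turns each frame's landmark displacements into per-vertex displacements of a template mesh. All module names, layer sizes, and the one-hot conditioning here are illustrative assumptions; the paper's Motion3DGAN is a manifold-valued GAN over motion trajectories, not the plain MLPs used in this stand-in.

```python
import torch
import torch.nn as nn

# Illustrative constants (not from the paper): frames, landmarks,
# dense mesh vertices, expression classes, and noise dimension.
T, L, V, C, Z = 30, 68, 5023, 10, 128


class Motion3DGANSketch(nn.Module):
    """Toy stand-in for Motion3DGAN: noise + start/end expression labels
    -> a landmark motion trajectory of shape (T, L, 3)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z + 2 * C, 512), nn.ReLU(),
            nn.Linear(512, T * L * 3),
        )

    def forward(self, z, start_label, end_label):
        x = torch.cat([z, start_label, end_label], dim=-1)
        return self.net(x).view(-1, T, L, 3)  # per-frame landmark displacements


class S2DDecSketch(nn.Module):
    """Toy stand-in for the Sparse2Dense decoder: sparse landmark
    displacements of one frame -> dense per-vertex displacements,
    added to a neutral template mesh of the known topology."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(L * 3, 1024), nn.ReLU(),
            nn.Linear(1024, V * 3),
        )

    def forward(self, landmark_disp, template_verts):
        dense_disp = self.net(landmark_disp.flatten(1)).view(-1, V, 3)
        return template_verts + dense_disp  # deformed mesh vertices


# Usage: generate one composed transition (e.g. expression 3 -> expression 7).
gan, decoder = Motion3DGANSketch(), S2DDecSketch()
z = torch.randn(1, Z)
start = nn.functional.one_hot(torch.tensor([3]), C).float()  # starting expression label
end = nn.functional.one_hot(torch.tensor([7]), C).float()    # ending expression label
template = torch.zeros(1, V, 3)                              # neutral mesh (placeholder)

trajectory = gan(z, start, end)                               # (1, T, L, 3)
meshes = torch.stack([decoder(trajectory[:, t], template) for t in range(T)], dim=1)
print(meshes.shape)  # torch.Size([1, 30, 5023, 3])
```

Because the generator only produces landmark motion and the decoder only maps displacements onto a fixed template, the identity of the animated face is carried entirely by the template mesh, which is the identity-independence property the abstract claims.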
- Published
- 2024