1. Facial Prior Based First Order Motion Model for Micro-expression Generation
- Author
Zhang, Yi, Zhao, Youjun, Wen, Yuhang, Tang, Zixuan, Xu, Xinhua, and Liu, Mengyuan
- Abstract
Spotting facial micro-expressions in videos has various potential applications in fields including clinical diagnosis and interrogation, yet the task remains difficult due to the limited scale of training data. To address this problem, this paper formulates a new task called micro-expression generation and presents a strong baseline that combines the first order motion model with facial prior knowledge. Given a target face, we drive it to generate micro-expression videos according to the motion patterns of source videos. Specifically, our model involves three modules. First, we extract facial prior features with a region focusing module. Second, we estimate facial motion using keypoints and local affine transformations with a motion prediction module. Third, an expression generation module drives the target face to generate videos. We train our model on the public CASME II, SAMM, and SMIC datasets and then use it to generate new micro-expression videos for evaluation. Our model achieved first place in the Facial Micro-Expression Challenge 2021 (MEGC2021), where its superior performance was verified by three experts with Facial Action Coding System certification. Source code is available at https://github.com/Necolizer/Facial-Prior-Based-FOMM.
- Comment
ACM Multimedia 2021
- Published
2023
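
The abstract describes a three-module pipeline: a region focusing module for facial priors, a motion prediction module based on keypoints and local affine transformations, and an expression generation module that renders the driven face. The following is a minimal, hypothetical sketch of how such a pipeline could be wired together; the module internals, layer sizes, and keypoint count are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn


class RegionFocusingModule(nn.Module):
    """Extracts facial-prior feature maps that emphasize expression-relevant regions."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, face):                      # face: (B, 3, H, W)
        return self.encoder(face)                 # prior features: (B, C, H, W)


class MotionPredictionModule(nn.Module):
    """Predicts K keypoints and one local affine (2x2 Jacobian) per keypoint."""
    def __init__(self, channels=64, num_kp=10):
        super().__init__()
        self.kp_head = nn.Conv2d(channels, num_kp * 2, 1)      # (x, y) per keypoint
        self.affine_head = nn.Conv2d(channels, num_kp * 4, 1)  # 2x2 matrix per keypoint

    def forward(self, features):                  # features: (B, C, H, W)
        b = features.size(0)
        kp = self.kp_head(features).mean(dim=(2, 3)).view(b, -1, 2)
        affine = self.affine_head(features).mean(dim=(2, 3)).view(b, -1, 2, 2)
        return kp, affine


class ExpressionGenerationModule(nn.Module):
    """Decodes the target-face features conditioned on the predicted motion."""
    def __init__(self, channels=64, num_kp=10):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(channels + num_kp * 2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, target_feats, kp_motion):   # kp_motion: (B, K, 2)
        b, _, h, w = target_feats.shape
        motion_map = kp_motion.view(b, -1, 1, 1).expand(b, -1, h, w)
        return self.decoder(torch.cat([target_feats, motion_map], dim=1))


def generate_frame(target_face, driving_frame, prior, motion, generator):
    """Drive the target face with the relative motion of one driving (source) frame."""
    kp_drv, _ = motion(prior(driving_frame))
    kp_tgt, _ = motion(prior(target_face))
    return generator(prior(target_face), kp_drv - kp_tgt)


if __name__ == "__main__":
    prior, motion, gen = RegionFocusingModule(), MotionPredictionModule(), ExpressionGenerationModule()
    target, driving = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
    print(generate_frame(target, driving, prior, motion, gen).shape)  # torch.Size([1, 3, 64, 64])
```

Running this per driving frame of a source micro-expression clip would yield a generated video for the target identity; the real system additionally uses dense motion estimation and occlusion handling from the first order motion model, which are omitted here for brevity.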