1. A Music-Driven Deep Generative Adversarial Model for Guzheng Playing Animation
- Author
Fan Changjie, Ding Yu, Zhimeng Zhang, Zhao Zeng, Gongzheng Li, Zhigang Deng, and Chen Jiali
- Subjects
Computer science, Generalization, Process (engineering), Deep learning, Musical instrument, Animation, Computer Graphics and Computer-Aided Design, Motion capture, Motion (physics), Human–computer interaction, Signal Processing, Computer Vision and Pattern Recognition, Artificial intelligence, Software, Generative grammar
- Abstract
To date, relatively few efforts have been made on the automatic generation of musical instrument playing animations. This problem is challenging due to the intrinsically complex, temporal relationship between music and human motion, as well as the lack of high-quality music-playing motion datasets. In this paper, we propose a fully automatic, deep learning based framework to synthesize realistic upper-body animations from novel guzheng music input. Specifically, based on a recorded audiovisual motion capture dataset, we carefully design a generative adversarial network (GAN) based approach to capture the temporal relationship between the music and the human motion data. In this process, data augmentation is employed to improve the generalization of our approach to a variety of guzheng music inputs. Through extensive objective and subjective experiments, we show that our method can generate visually plausible guzheng-playing animations that are well synchronized with the input guzheng music, and that it significantly outperforms the state-of-the-art methods. In addition, through an ablation study, we validate the contributions of the carefully designed modules in our framework.
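The abstract describes training a GAN on time-aligned music and motion data, with data augmentation to improve generalization. As a rough illustration only (not the authors' code; all function names, window sizes, and feature dimensions here are assumptions), one common way to prepare such paired sequence data is to slice the aligned streams into fixed-length windows and jitter the music features:

```python
# Hypothetical sketch: pairing windowed music features with motion frames
# for sequence-level adversarial training, plus a toy noise-based
# augmentation. Shapes and names are illustrative assumptions.
import numpy as np

def make_training_pairs(music_feats, motion, window=8):
    """Slice time-aligned music-feature and motion sequences into
    fixed-length (music, motion) windows."""
    assert len(music_feats) == len(motion), "streams must be time-aligned"
    pairs = []
    for start in range(0, len(music_feats) - window + 1, window):
        pairs.append((music_feats[start:start + window],
                      motion[start:start + window]))
    return pairs

def augment(music_feats, noise_std=0.01, rng=None):
    """Toy augmentation: add Gaussian jitter to the music features so a
    generator sees more varied inputs."""
    if rng is None:
        rng = np.random.default_rng(0)
    return music_feats + rng.normal(0.0, noise_std, music_feats.shape)

# 64 aligned frames: 20-dim music features, 30-dim upper-body pose
music = np.zeros((64, 20))
pose = np.zeros((64, 30))
pairs = make_training_pairs(music, pose, window=8)
print(len(pairs))  # 8 non-overlapping windows
```

In a GAN setup along the lines the abstract sketches, the generator would map each music window to a motion window, while the discriminator judges whether a (music, motion) window pair is real or synthesized.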
- Published
- 2023