
MotionChain: Conversational Motion Controllers via Multimodal Prompts

Authors:
Jiang, Biao
Chen, Xin
Zhang, Chi
Yin, Fukun
Li, Zhuoyuan
Yu, Gang
Fan, Jiayuan
Publication Year: 2024

Abstract

Recent advancements in language models have demonstrated their adeptness at conducting multi-turn dialogues and retaining conversational context. However, this proficiency remains largely unexplored in other multimodal generative models, particularly human motion models. By integrating multi-turn conversation into the control of continuous virtual human movements, generative human motion models can enable an intuitive, step-by-step process of human task execution for humanoid robotics, game agents, and other embodied systems. In this work, we present MotionChain, a conversational human motion controller that generates continuous and long-term human motion from multimodal prompts. Specifically, MotionChain consists of multimodal tokenizers that transform various data types, such as text, image, and motion, into discrete tokens, coupled with a Vision-Motion-aware Language model. By leveraging large-scale language, vision-language, and vision-motion data to assist motion-related generation tasks, MotionChain comprehends each instruction in a multi-turn conversation and generates human motions that follow these prompts. Extensive experiments validate the efficacy of MotionChain, demonstrating state-of-the-art performance in conversational motion generation, as well as a more intuitive manner of controlling and interacting with virtual humans.

Comment: 14 pages, 4 figures
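The abstract describes a pipeline in which text, image, and motion inputs are each mapped to discrete tokens in a shared vocabulary and then consumed by a language model as one conversational stream. Below is a minimal sketch of that idea, not the authors' code: all class and function names (TokenizedPrompt, tokenize_text, tokenize_motion, build_conversation_stream) and the toy quantization scheme are hypothetical illustrations, standing in for real subword and VQ-style motion tokenizers.

```python
# Minimal sketch (assumptions, not the authors' implementation): turning
# multimodal prompts into one discrete token stream for a motion-generating LM.

from dataclasses import dataclass
from typing import List


@dataclass
class TokenizedPrompt:
    tokens: List[int]   # discrete token ids in a shared vocabulary
    modality: str       # "text", "image", or "motion"


def tokenize_text(text: str, vocab_offset: int = 0) -> TokenizedPrompt:
    # Stand-in for a subword tokenizer: hash words into a toy id range.
    ids = [vocab_offset + (hash(w) % 1000) for w in text.split()]
    return TokenizedPrompt(ids, "text")


def tokenize_motion(joint_frames: List[List[float]],
                    vocab_offset: int = 2000) -> TokenizedPrompt:
    # Stand-in for a VQ-style motion tokenizer: quantize each pose frame
    # to a single discrete code id.
    ids = [vocab_offset + (int(sum(frame) * 10) % 512) for frame in joint_frames]
    return TokenizedPrompt(ids, "motion")


def build_conversation_stream(turns: List[TokenizedPrompt]) -> List[int]:
    # Interleave all turns into one sequence so the language model can
    # retain multi-turn conversational context.
    BOS, SEP = 9998, 9999
    stream = [BOS]
    for turn in turns:
        stream.extend(turn.tokens)
        stream.append(SEP)
    return stream


if __name__ == "__main__":
    turn1 = tokenize_text("walk forward slowly")
    motion1 = tokenize_motion([[0.1, 0.2, 0.3], [0.2, 0.2, 0.4]])
    turn2 = tokenize_text("now raise both arms")
    stream = build_conversation_stream([turn1, motion1, turn2])
    print(stream)  # token ids a vision-motion-aware LM would condition on
```

In such a setup, the language model autoregressively predicts motion tokens after the final separator, and a motion de-tokenizer (decoder) would map them back to continuous joint trajectories; those components are outside the scope of this sketch.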

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2404.01700
Document Type: Working Paper