
Efficient Text-driven Motion Generation via Latent Consistency Training

Authors:
Hu, Mengxian
Zhu, Minghao
Zhou, Xun
Yan, Qingqing
Li, Shu
Liu, Chengju
Chen, Qijun
Publication Year:
2024

Abstract

Motion diffusion models excel at text-driven motion generation but struggle with real-time inference, since motion sequences are redundant along the time axis and solving the reverse diffusion trajectory requires tens or hundreds of sequential iterations. In this paper, we propose a Motion Latent Consistency Training (MLCT) framework that enables large-scale skip sampling of compact motion latent representations by constraining the consistency of the outputs of adjacent perturbed states on a precomputed trajectory. In particular, we design a flexible motion autoencoder with quantization constraints to guarantee the low dimensionality, succinctness, and boundedness of the motion embedding space. We further present a conditionally guided consistency training framework based on conditional trajectory simulation, which requires no additional pre-trained diffusion model and significantly improves conditional generation performance at minimal training cost. Experiments on two benchmarks demonstrate our model's state-of-the-art performance with an 80% saving in inference cost and an inference time of around 14 ms on a single RTX 4090 GPU.
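The central training signal described in the abstract, constraining a network to produce consistent outputs at adjacent perturbed states along a discretized trajectory, can be illustrated with a short sketch. The following is a minimal PyTorch sketch under assumed interfaces: the networks model and ema_model, the latent shapes, the text-condition tensor cond, and the noise schedule sigmas are hypothetical placeholders for illustration, not the paper's actual code or architecture.

    import torch
    import torch.nn.functional as F

    def consistency_training_step(model, ema_model, z0, cond, sigmas):
        """One illustrative consistency-training step (hypothetical interfaces).

        model, ema_model : callables f(z_t, sigma, cond) -> predicted clean latent
        z0               : clean motion latents from the autoencoder, shape (B, D)
        cond             : text-condition embeddings, shape (B, C)
        sigmas           : 1-D tensor of noise levels, ascending (discretized trajectory)
        """
        B = z0.shape[0]
        # Pick a random pair of adjacent noise levels (n, n+1) on the trajectory.
        n = torch.randint(0, len(sigmas) - 1, (B,))
        eps = torch.randn_like(z0)
        z_n  = z0 + sigmas[n].view(B, 1) * eps        # less-perturbed state
        z_n1 = z0 + sigmas[n + 1].view(B, 1) * eps    # adjacent, more-perturbed state

        # Student prediction at the noisier state; frozen EMA teacher at the adjacent state.
        pred_student = model(z_n1, sigmas[n + 1], cond)
        with torch.no_grad():
            pred_teacher = ema_model(z_n, sigmas[n], cond)

        # Consistency loss: adjacent perturbed states must map to the same output,
        # which is what later permits large skip steps at sampling time.
        return F.mse_loss(pred_student, pred_teacher)

Because the teacher target is built directly from the simulated (precomputed) trajectory rather than from a separately trained diffusion model's denoising steps, this style of objective avoids the pre-trained teacher that standard consistency distillation would require.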

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.02791
Document Type:
Working Paper