
Text Prompting for Multi-Concept Video Customization by Autoregressive Generation

Authors:
Kothandaraman, Divya
Sohn, Kihyuk
Villegas, Ruben
Voigtlaender, Paul
Manocha, Dinesh
Babaeizadeh, Mohammad

Publication Year:
2024

Abstract

We present a method for multi-concept customization of pretrained text-to-video (T2V) models. Intuitively, the multi-concept customized video can be derived from the (non-linear) intersection of the video manifolds of the individual concepts, which is not straightforward to find. We hypothesize that sequential and controlled walking towards the intersection of the video manifolds, directed by text prompting, leads to the solution. To do so, we generate the various concepts and their corresponding interactions, sequentially, in an autoregressive manner. Our method can generate videos of multiple custom concepts (subjects, action and background) such as a teddy bear running towards a brown teapot, a dog playing violin and a teddy bear swimming in the ocean. We quantitatively evaluate our method using videoCLIP and DINO scores, in addition to human evaluation. Videos for results presented in this paper can be found at https://github.com/divyakraman/MultiConceptVideo2024.

Comment: Paper accepted to AI4CC Workshop at CVPR 2024
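The DINO-score evaluation mentioned in the abstract is, in spirit, an embedding-similarity metric: generated frames are embedded (typically with a pretrained DINO vision transformer) and compared against embeddings of the concept reference images. The sketch below shows only that generic comparison step with NumPy; the embedding model is a stand-in (random vectors), and the paper's exact protocol may differ.

```python
import numpy as np

def dino_style_score(frame_embeddings, reference_embeddings):
    """Average pairwise cosine similarity between generated-frame embeddings
    and concept-reference embeddings (higher = more concept-faithful).
    This is a generic similarity metric in the spirit of a DINO score,
    not the paper's exact evaluation code."""
    f = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    r = reference_embeddings / np.linalg.norm(reference_embeddings, axis=1, keepdims=True)
    return float((f @ r.T).mean())

# Toy usage with random stand-in embeddings; a real pipeline would embed
# video frames and reference images with a pretrained DINO ViT.
rng = np.random.default_rng(0)
frames = rng.normal(size=(8, 384))   # 8 frames, 384-dim embeddings
refs = rng.normal(size=(3, 384))     # 3 concept reference images
score = dino_style_score(frames, refs)
```

Averaging over all frame/reference pairs gives a single per-video number, which makes it easy to compare customization methods side by side.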

Details

Database:
arXiv

Publication Type:
Report

Accession number:
edsarx.2405.13951

Document Type:
Working Paper