
Breathing Life Into Sketches Using Text-to-Video Priors

Authors:
Gal, Rinon
Vinker, Yael
Alaluf, Yuval
Bermano, Amit H.
Cohen-Or, Daniel
Shamir, Ariel
Chechik, Gal
Publication Year:
2023

Abstract

A sketch is one of the most intuitive and versatile tools humans use to convey their ideas visually. An animated sketch opens another dimension to the expression of ideas and is widely used by designers for a variety of purposes. Animating sketches is a laborious process, requiring extensive experience and professional design skills. In this work, we present a method that automatically adds motion to a single-subject sketch (hence, "breathing life into it"), merely by providing a text prompt indicating the desired motion. The output is a short animation provided in vector representation, which can be easily edited. Our method does not require extensive training, but instead leverages the motion prior of a large pretrained text-to-video diffusion model using a score-distillation loss to guide the placement of strokes. To promote natural and smooth motion and to better preserve the sketch's appearance, we model the learned motion through two components. The first governs small local deformations and the second controls global affine transformations. Surprisingly, we find that even models that struggle to generate sketch videos on their own can still serve as a useful backbone for animating abstract representations.

Comment: Project page: https://livesketch.github.io/
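The two-component motion model summarized in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function name, array layouts, and the order in which the global affine transform and the local deformations are composed are all assumptions.

```python
import numpy as np

def animate_points(points, num_frames, local_disp, affines):
    """Apply a hypothetical two-component motion model to sketch control points.

    points:     (N, 2) array of stroke control points.
    num_frames: number of output frames F.
    local_disp: (F, N, 2) small per-point offsets (local deformation component).
    affines:    (F, 2, 3) per-frame affine matrices (global motion component).
    Returns an (F, N, 2) array of animated point positions.
    """
    # Homogeneous coordinates so a single matrix handles rotation,
    # scale, and translation together.
    homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 3)
    frames = []
    for f in range(num_frames):
        # Global affine transform shared by every point in this frame.
        moved = homog @ affines[f].T  # (N, 2)
        # Small local deformation refines each point individually.
        frames.append(moved + local_disp[f])
    return np.stack(frames)
```

With identity affine matrices and zero local displacements, the sketch stays fixed across all frames; in the paper's setting, both components would instead be optimized per frame under the score-distillation loss.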

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2311.13608
Document Type: Working Paper