
Soundini: Sound-Guided Diffusion for Natural Video Editing

Authors:
Lee, Seung Hyun
Kim, Sieun
Yoo, Innfarn
Yang, Feng
Cho, Donghyeon
Kim, Youngseo
Chang, Huiwen
Kim, Jinkyu
Kim, Sangpil
Publication Year: 2023

Abstract

We propose a method for adding sound-guided visual effects to specific regions of videos in a zero-shot setting. Animating the appearance of a visual effect is challenging because each frame of the edited video should exhibit visual changes while maintaining temporal consistency. Moreover, existing video editing solutions focus on temporal consistency across frames while ignoring visual style variations over time, e.g., a thunderstorm, waves, or fire crackling. To overcome this limitation, we utilize temporal sound features to drive the dynamic style. Specifically, we guide denoising diffusion probabilistic models with an audio latent representation in a shared audio-visual latent space. To the best of our knowledge, our work is the first to explore sound-guided natural video editing from various sound sources with sound-specific properties such as intensity, timbre, and volume. Additionally, we design optical-flow-based guidance to generate temporally consistent video frames, capturing the pixel-wise relationship between adjacent frames. Experimental results show that our method outperforms existing video editing techniques, producing more realistic visual effects that reflect the properties of sound. Please visit our page: https://kuai-lab.github.io/soundini-gallery/.
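
The guidance described in the abstract can be pictured as a classifier-guidance-style sampling step. The sketch below is a minimal illustration only, assuming hypothetical components not taken from the paper: denoiser (an epsilon-prediction DDPM), encode_image together with a precomputed audio_emb in a shared audio-visual latent space, a segmentation mask for the edited region, and warped_prev_frame obtained by warping the previous output frame with estimated optical flow. It is not the authors' implementation.

    # A minimal sketch of one reverse-diffusion step with sound guidance and
    # optical-flow consistency guidance, in the spirit of the abstract above.
    # All names (denoiser, encode_image, audio_emb, warped_prev_frame, mask)
    # are assumptions for illustration, not the authors' API.
    import torch
    import torch.nn.functional as F

    def guided_step(x_t, t, denoiser, encode_image, audio_emb,
                    warped_prev_frame, mask,
                    alpha_bar_t, alpha_bar_prev,
                    w_sound=1.0, w_flow=1.0):
        # Enable gradients w.r.t. the noisy frame so guidance can be applied.
        x_t = x_t.detach().requires_grad_(True)

        # Standard DDPM estimate of the clean frame x_0 from predicted noise.
        eps = denoiser(x_t, t)
        x0_hat = (x_t - torch.sqrt(1 - alpha_bar_t) * eps) / torch.sqrt(alpha_bar_t)

        # Sound guidance: pull the masked (edited) region's visual embedding
        # toward the audio embedding in the shared audio-visual latent space.
        img_emb = encode_image(x0_hat * mask)
        sound_loss = 1 - F.cosine_similarity(img_emb, audio_emb, dim=-1).mean()

        # Flow guidance: penalize deviation from the previous output frame
        # warped into the current frame by estimated optical flow.
        flow_loss = F.mse_loss(x0_hat * mask, warped_prev_frame * mask)

        grad = torch.autograd.grad(w_sound * sound_loss + w_flow * flow_loss,
                                   x_t)[0]

        # Classifier-guidance-style correction of the noise prediction,
        # followed by a deterministic DDIM-style update.
        eps_g = eps + torch.sqrt(1 - alpha_bar_t) * grad
        x0_g = (x_t - torch.sqrt(1 - alpha_bar_t) * eps_g) / torch.sqrt(alpha_bar_t)
        return (torch.sqrt(alpha_bar_prev) * x0_g
                + torch.sqrt(1 - alpha_bar_prev) * eps_g).detach()

In such a scheme, audio_emb would be recomputed per frame from the corresponding audio segment, so that changes in intensity and timbre over the sound's duration drive the visual style over time, consistent with the paper's use of temporal sound features.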

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2304.06818
Document Type: Working Paper