
FoleyCrafter: Bring Silent Videos to Life with Lifelike and Synchronized Sounds

Authors :
Zhang, Yiming
Gu, Yicheng
Zeng, Yanhong
Xing, Zhening
Wang, Yuancheng
Wu, Zhizheng
Chen, Kai
Publication Year :
2024

Abstract

We study Neural Foley, the automatic generation of high-quality sound effects synchronized with videos, enabling an immersive audio-visual experience. Despite its wide range of applications, existing approaches encounter limitations when it comes to simultaneously synthesizing high-quality and video-aligned (i.e., semantically relevant and temporally synchronized) sounds. To overcome these limitations, we propose FoleyCrafter, a novel framework that leverages a pre-trained text-to-audio model to ensure high-quality audio generation. FoleyCrafter comprises two key components: a semantic adapter for semantic alignment and a temporal controller for precise audio-video synchronization. The semantic adapter uses parallel cross-attention layers to condition audio generation on video features, producing realistic sound effects that are semantically relevant to the visual content. Meanwhile, the temporal controller incorporates an onset detector and a timestamp-based adapter to achieve precise audio-video alignment. A notable advantage of FoleyCrafter is its compatibility with text prompts, enabling the use of text descriptions to achieve controllable and diverse video-to-audio generation according to user intent. We conduct extensive quantitative and qualitative experiments on standard benchmarks to verify the effectiveness of FoleyCrafter. Models and code are available at https://github.com/open-mmlab/FoleyCrafter.

Comment: Project page: https://foleycrafter.github.io/
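To make the two components concrete, the sketch below illustrates how a parallel ("decoupled") cross-attention branch over video features could sit alongside the frozen text cross-attention of a pre-trained text-to-audio model, together with a toy onset-based timestamp mask for temporal conditioning. This is a minimal, assumption-based illustration: the module and parameter names (e.g. SemanticAdapterAttention, video_dim, onset_timestamp_mask) are hypothetical and are not the released FoleyCrafter API.

```python
# Hypothetical sketch of FoleyCrafter-style conditioning; names are illustrative,
# not the released open-mmlab/FoleyCrafter implementation.
import torch
import torch.nn as nn


class SemanticAdapterAttention(nn.Module):
    """Parallel (decoupled) cross-attention: the text cross-attention of the
    pre-trained text-to-audio model is kept as-is, and an extra cross-attention
    branch over video features is added alongside it."""

    def __init__(self, dim: int, text_dim: int, video_dim: int, heads: int = 8):
        super().__init__()
        self.text_attn = nn.MultiheadAttention(dim, heads, kdim=text_dim,
                                               vdim=text_dim, batch_first=True)
        self.video_attn = nn.MultiheadAttention(dim, heads, kdim=video_dim,
                                                vdim=video_dim, batch_first=True)
        self.scale = nn.Parameter(torch.tensor(1.0))  # weight of the video branch

    def forward(self, audio_latents, text_tokens, video_tokens):
        # audio_latents: (B, T, dim) latent audio tokens from a UNet block
        # text_tokens:   (B, L_t, text_dim) text-encoder features
        # video_tokens:  (B, L_v, video_dim) per-frame visual features
        text_out, _ = self.text_attn(audio_latents, text_tokens, text_tokens)
        video_out, _ = self.video_attn(audio_latents, video_tokens, video_tokens)
        return audio_latents + text_out + self.scale * video_out


def onset_timestamp_mask(onset_probs: torch.Tensor, threshold: float = 0.5):
    """Toy stand-in for the temporal controller: turn per-frame onset
    probabilities from an onset detector into a binary timestamp mask
    that a timestamp-based adapter could consume for audio-video alignment."""
    return (onset_probs > threshold).float()


if __name__ == "__main__":
    B, T, L_t, L_v = 2, 64, 16, 32
    block = SemanticAdapterAttention(dim=320, text_dim=768, video_dim=1024)
    out = block(torch.randn(B, T, 320),
                torch.randn(B, L_t, 768),
                torch.randn(B, L_v, 1024))
    mask = onset_timestamp_mask(torch.rand(B, 100))
    print(out.shape, mask.shape)  # torch.Size([2, 64, 320]) torch.Size([2, 100])
```

Adding the video branch in parallel, rather than modifying the existing text cross-attention, is one plausible way to preserve the generation quality of the pre-trained text-to-audio backbone while still accepting text prompts; the exact integration in FoleyCrafter is described in the paper and repository linked above.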

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2407.01494
Document Type :
Working Paper