
V-LASIK: Consistent Glasses-Removal from Videos Using Synthetic Data

Authors:
Shalev-Arkushin, Rotem
Azulay, Aharon
Halperin, Tavi
Richardson, Eitan
Bermano, Amit H.
Fried, Ohad
Publication Year:
2024

Abstract

Diffusion-based generative models have recently shown remarkable image and video editing capabilities. However, local video editing, particularly removal of small attributes like glasses, remains a challenge. Existing methods either alter the videos excessively, generate unrealistic artifacts, or fail to perform the requested edit consistently throughout the video. In this work, we focus on consistent and identity-preserving removal of glasses in videos, using it as a case study for consistent local attribute removal. Due to the lack of paired data, we adopt a weakly supervised approach and generate synthetic, imperfect data using an adjusted pretrained diffusion model. We show that despite these data imperfections, by learning from our generated data and leveraging the prior of pretrained diffusion models, our model performs the desired edit consistently while preserving the original video content. Furthermore, we demonstrate that our method generalizes to other local video editing tasks by applying it successfully to facial sticker removal. Our approach demonstrates significant improvement over existing methods, showcasing the potential of leveraging synthetic data and strong video priors for local video editing tasks.
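The abstract describes generating imperfect synthetic training pairs with an adjusted pretrained diffusion model, but does not specify the adjustment. The snippet below is only a minimal sketch of that general idea under stated assumptions: it uses an off-the-shelf inpainting pipeline from the `diffusers` library to produce a (with-glasses, without-glasses) image pair. The checkpoint name, input file names, mask source, and prompt are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: synthesize an imperfect "glasses removed" counterpart of a frame
# with a pretrained inpainting diffusion model (assumed checkpoint below).
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed off-the-shelf checkpoint
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("frame_with_glasses.png").convert("RGB").resize((512, 512))
# Binary mask covering the eyewear region; in practice this would come from
# a face parser or landmark detector (hypothetical file here).
mask = Image.open("glasses_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a photo of a person's face, no glasses",
    image=frame,
    mask_image=mask,
    num_inference_steps=50,
).images[0]

# (frame, result) is one imperfect synthetic pair: the inpainted output may
# drift in identity or lighting, which is why such data is treated as weak
# supervision rather than used directly as the final edit.
result.save("frame_glasses_removed.png")
```

Collecting many such pairs and training a video model on them, while relying on the prior of a pretrained video diffusion model for temporal consistency, is the strategy the abstract outlines.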

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.14510
Document Type:
Working Paper