
Learning Long Term Style Preserving Blind Video Temporal Consistency

Authors :
Matthieu Perrot
Julien Despois
Hugo Thimonier
Robin Kips
L'Oréal Research & Innovation
Institut Polytechnique de Paris (IP Paris)
Département Images, Données, Signal (IDS)
Télécom ParisTech
Image, Modélisation, Analyse, GEométrie, Synthèse (IMAGES)
Laboratoire Traitement et Communication de l'Information (LTCI)
Institut Mines-Télécom [Paris] (IMT) - Télécom Paris
Source :
IEEE International Conference on Multimedia and Expo (ICME), Jul 2021, Virtual, France
Publication Year :
2021
Publisher :
IEEE, 2021.

Abstract

When image-trained algorithms are applied independently to successive video frames, noxious flickering tends to appear. State-of-the-art post-processing techniques that aim to enforce temporal consistency introduce other temporal artifacts and visibly alter the style of the videos. We propose a post-processing model, agnostic to the transformation applied to the video (e.g. style transfer, image manipulation using GANs, etc.), in the form of a recurrent neural network. Our model is trained using a Ping Pong procedure and its corresponding loss, recently introduced for GAN-based video generation, together with a novel style preserving perceptual loss. The former improves long-term temporal consistency learning, while the latter fosters style preservation. We evaluate our model on the DAVIS and videvo.net datasets and show that our approach offers state-of-the-art results for flicker removal while preserving the overall style of the videos better than previous approaches.
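For readers who want a concrete picture of the two training losses named in the abstract, the sketch below shows one plausible PyTorch formulation: a Ping Pong term that compares a forward pass over the sequence with a time-reversed pass (the TecoGAN-style idea the abstract refers to), and a Gram-matrix perceptual term that keeps the post-processed frames close in style to the original per-frame outputs. This is only an illustrative sketch, not the authors' implementation; the function names, distance choices (L1, MSE), and feature-extraction setup are all assumptions.

    # Hypothetical sketch of the two loss terms; not the authors' code.
    import torch
    import torch.nn.functional as F

    def ping_pong_loss(forward_frames, backward_frames):
        """Penalize disagreement between a forward pass over the sequence
        and a time-reversed ("ping pong") pass at matching timesteps.

        forward_frames, backward_frames: lists of tensors (B, C, H, W);
        backward_frames is assumed to be re-reversed into forward order.
        """
        loss = 0.0
        for f_t, b_t in zip(forward_frames, backward_frames):
            loss = loss + F.l1_loss(f_t, b_t)
        return loss / len(forward_frames)

    def gram_matrix(feat):
        """Gram matrix of a feature map (B, C, H, W), the usual style statistic."""
        b, c, h, w = feat.shape
        f = feat.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def style_preserving_loss(output_feats, stylized_feats):
        """Match Gram matrices of the post-processed frames against the
        original per-frame stylized outputs, so that deflickering does not
        wash out the style. Both arguments are lists of feature maps taken
        from several layers of a pretrained network (e.g. VGG)."""
        loss = 0.0
        for out_f, sty_f in zip(output_feats, stylized_feats):
            loss = loss + F.mse_loss(gram_matrix(out_f), gram_matrix(sty_f))
        return loss

    # Hypothetical usage with a recurrent post-processing network `net`:
    #   fwd = [net(x) for x in seq]
    #   bwd = [net(x) for x in reversed(seq)][::-1]
    #   total = ping_pong_loss(fwd, bwd) + style_preserving_loss(feats(out), feats(sty))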

Details

Database :
OpenAIRE
Journal :
2021 IEEE International Conference on Multimedia and Expo (ICME)
Accession number :
edsair.doi.dedup.....ee0ec408ddbab64f90bb45fc6cf6e1c1
Full Text :
https://doi.org/10.1109/icme51207.2021.9428445