Learning Long Term Style Preserving Blind Video Temporal Consistency
- Author
Matthieu Perrot, Julien Despois, Hugo Thimonier, Robin Kips; L'ORÉAL Research & Innovation; Laboratoire Traitement et Communication de l'Information (LTCI), Département Images, Données, Signal (IDS), groupe Image, Modélisation, Analyse, GEométrie, Synthèse (IMAGES), Télécom Paris, Institut Polytechnique de Paris (IP Paris), Institut Mines-Télécom (IMT)
- Subjects
FOS: Computer and information sciences, Computer Science - Computer Vision and Pattern Recognition (cs.CV), Machine Learning, Computer Vision and Image Processing, Computing Methodologies: Image Processing and Computer Vision, [INFO] Computer Science [cs], Video post-processing, Video processing, Flicker, Deep learning, Recurrent neural network, Time consistency, Visualization, Artificial intelligence
- Abstract
When trying to independently apply image-trained algorithms to successive frames in videos, noxious flickering tends to appear. State-of-the-art post-processing techniques that aim at fostering temporal consistency generate other temporal artifacts and visually alter the style of videos. We propose a post-processing model, agnostic to the transformation applied to videos (e.g., style transfer, image manipulation using GANs, etc.), in the form of a recurrent neural network. Our model is trained using a Ping Pong procedure and its corresponding loss, recently introduced for GAN video generation, as well as a novel style preserving perceptual loss. The former improves long-term temporal consistency learning, while the latter fosters style preservation. We evaluate our model on the DAVIS and videvo.net datasets and show that our approach offers state-of-the-art results concerning flicker removal, and preserves the overall style of the videos better than previous approaches.
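As context for the Ping Pong procedure mentioned in the abstract, the sketch below illustrates the general idea of such a consistency term: run a recurrent post-processing model over a clip forward and then backward, and penalize disagreement between the two outputs for the same frame, which discourages long-term drift. The interface `model(frame, state) -> (output, state)`, the function name, and the squared-error formulation are illustrative assumptions, not the authors' implementation.

```python
import torch

def ping_pong_loss(model, frames, init_state):
    """Hypothetical Ping Pong consistency term for a recurrent model.

    frames: list of tensors (one per frame of a short clip).
    The clip is processed forward (1..T) and then backward (T-1..1),
    and outputs for the same frame from the two passes are matched.
    """
    # Forward pass over frames 1..T
    fwd, state = [], init_state
    for f in frames:
        out, state = model(f, state)
        fwd.append(out)

    # Backward pass over frames T-1..1, continuing the recurrent state;
    # the output for frame T is shared between the two passes.
    bwd = [fwd[-1]]
    for f in reversed(frames[:-1]):
        out, state = model(f, state)
        bwd.append(out)
    bwd = list(reversed(bwd))  # realign so bwd[t] corresponds to frame t

    # Sum of per-frame squared differences between forward and backward outputs
    return sum(torch.mean((a - b) ** 2) for a, b in zip(fwd, bwd))
```

In a training loop this term would be added to the model's other objectives (e.g., the style preserving perceptual loss mentioned above); the weighting between the terms is not specified here.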
- Published
- 2021