Multi-modality Deep Restoration of Extremely Compressed Face Videos
- Publication Year :
- 2021
Abstract
- Arguably the most common and salient object in daily video communications is the talking head, as encountered in social media, virtual classrooms, teleconferences, news broadcasting, talk shows, etc. When communication bandwidth is limited by network congestion or cost constraints, compression artifacts in talking-head videos are inevitable. The resulting video quality degradation is highly visible and objectionable due to the high acuity of the human visual system to faces. To solve this problem, we develop a multi-modality deep convolutional neural network (DCNN) method for restoring face videos that are aggressively compressed. The main innovation is a new DCNN architecture that incorporates known priors of multiple modalities: the video-synchronized speech signal and semantic elements of the compression code stream, including motion vectors, the code partition map and quantization parameters. These priors strongly correlate with the latent video and hence enhance the capability of deep learning to remove compression artifacts. Ample empirical evidence is presented to validate the superior performance of the proposed DCNN method on face videos over existing state-of-the-art methods.
- Comment: Accepted by TPAMI. Extension of DAVD-Net in CVPR 2020
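- Illustration (not part of the record): the abstract describes a DCNN that fuses decoded frames with two side-information modalities, a synchronized speech embedding and codec elements (motion vectors, partition map, quantization parameters). The sketch below is a minimal, hypothetical PyTorch rendering of that multi-modality fusion idea; the layer sizes, the channel-modulation fusion, and all names (e.g. `MultiModalFaceRestorer`) are assumptions, not the authors' DAVD-Net/TPAMI architecture.

```python
# Hypothetical sketch of multi-modality fusion for compressed-face restoration.
# Not the paper's architecture; fusion scheme and dimensions are assumed.
import torch
import torch.nn as nn


class MultiModalFaceRestorer(nn.Module):
    def __init__(self, speech_dim=128, base_ch=64, num_blocks=8):
        super().__init__()
        # Video branch: decoded (compressed) RGB frame.
        self.video_in = nn.Conv2d(3, base_ch, 3, padding=1)
        # Codec-prior branch: 2-ch motion vectors + 1-ch partition map + 1-ch QP map.
        self.codec_in = nn.Conv2d(4, base_ch, 3, padding=1)
        # Speech branch: project a per-frame speech embedding to channel-wise gates
        # (a simple assumed fusion choice, not the paper's).
        self.speech_proj = nn.Linear(speech_dim, base_ch)
        self.body = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(base_ch, base_ch, 3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(num_blocks)
        ])
        self.out = nn.Conv2d(base_ch, 3, 3, padding=1)

    def forward(self, frame, codec_maps, speech_emb):
        # frame:      (B, 3, H, W) decoded frame with compression artifacts
        # codec_maps: (B, 4, H, W) stacked motion vectors, partition map, QP map
        # speech_emb: (B, speech_dim) embedding of the synchronized speech segment
        feat = self.video_in(frame) + self.codec_in(codec_maps)
        gate = torch.sigmoid(self.speech_proj(speech_emb)).unsqueeze(-1).unsqueeze(-1)
        feat = self.body(feat * gate)
        # Residual prediction: estimate the correction to add back to the frame.
        return frame + self.out(feat)


if __name__ == "__main__":
    net = MultiModalFaceRestorer()
    restored = net(torch.rand(1, 3, 128, 128),
                   torch.rand(1, 4, 128, 128),
                   torch.rand(1, 128))
    print(restored.shape)  # torch.Size([1, 3, 128, 128])
```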
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2107.05548
- Document Type :
- Working Paper