
Neural Video Representation for Redundancy Reduction and Consistency Preservation

Authors:
Hayami, Taiga
Shindo, Takahiro
Akamatsu, Shunsuke
Watanabe, Hiroshi
Publication Year:
2024

Abstract

Implicit neural representations (INRs) embed various signals into neural networks and have gained attention in recent years for their versatility in handling diverse signal types. For video, INR achieves compression by embedding the video signal directly into a network and then compressing the network. Conventional methods use as network input either an index encoding the frame's time or features extracted from individual frames. The latter provides greater expressive capability because the input is specific to each video. However, the features extracted from frames often contain redundancy, which contradicts the purpose of video compression, and such redundancy makes it difficult to accurately reconstruct the high-frequency components of the frames. To address these problems, we focus on separating the high-frequency and low-frequency components of the reconstructed frame. We propose a video representation method that generates the high-frequency and low-frequency components of each frame from features extracted from the high-frequency components and from temporal information, respectively. Experimental results demonstrate that our method outperforms the existing HNeRV method, achieving superior results on 96 percent of the videos.
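To make the abstract's core idea concrete, the following is a minimal PyTorch sketch of a frequency-separated, two-branch video INR. It is an illustration only: the abstract does not specify the decomposition filter or the network architecture, so the Gaussian low-pass split, the branch layouts, the feature dimension, and all names (split_frequencies, TwoBranchVideoINR, feat_dim) are assumptions, and the HNeRV-style feature extractor is replaced by a stand-in feature vector.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_frequencies(frame: torch.Tensor, kernel_size: int = 9, sigma: float = 2.0):
    """Split a frame batch (B, C, H, W) into low- and high-frequency parts.
    A Gaussian blur serves as the low-pass filter here; this is an assumption,
    since the abstract does not specify the decomposition."""
    coords = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    kernel2d = g[:, None] @ g[None, :]                        # (k, k) separable Gaussian
    weight = kernel2d.expand(frame.shape[1], 1, kernel_size, kernel_size).to(frame)
    low = F.conv2d(frame, weight, padding=kernel_size // 2, groups=frame.shape[1])
    high = frame - low                                        # residual keeps edges and fine detail
    return low, high

class TwoBranchVideoINR(nn.Module):
    """Hypothetical two-branch decoder: a low-frequency branch conditioned on a
    temporal index t, and a high-frequency branch conditioned on a compact
    feature taken from the frame's high-frequency component."""
    def __init__(self, feat_dim: int = 64, hidden: int = 256, out_dim: int = 3 * 64 * 64):
        super().__init__()
        self.low_branch = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(), nn.Linear(hidden, out_dim))
        self.high_branch = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.GELU(), nn.Linear(hidden, out_dim))

    def forward(self, t: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        low = self.low_branch(t)            # coarse content from the temporal input
        high = self.high_branch(high_feat)  # fine detail from frame-specific features
        return low + high                   # recombine into the full (flattened) frame

# Example: decompose random frames and run the decoder.
frames = torch.rand(2, 3, 64, 64)
low, high = split_frequencies(frames)
model = TwoBranchVideoINR()
t = torch.tensor([[0.0], [0.5]])            # normalized frame times
feat = torch.randn(2, 64)                   # stand-in for high-frequency features
recon = model(t, feat).view(2, 3, 64, 64)
```

The design choice this sketch mirrors is the one the abstract argues for: only the high-frequency residual, where per-frame detail lives, needs frame-specific features, while the low-frequency content can be driven by cheap temporal input, reducing redundancy in what must be stored.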

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.18497
Document Type:
Working Paper