Multi-Type Self-Attention-Based Convolutional-Neural-Network Post-Filtering for AV1 Codec.
- Source :
- Mathematics (2227-7390); Sep 2024, Vol. 12, Issue 18, p2874, 24p
- Publication Year :
- 2024
Abstract
- Over the past few years, there has been substantial interest in applying Convolutional Neural Networks (CNNs) to post-filtering in video coding. Most research to date has used CNNs with various kernel sizes for post-filtering, concentrating primarily on High-Efficiency Video Coding/H.265 (HEVC) and Versatile Video Coding/H.266 (VVC). This narrow focus has limited the exploration of such techniques for other standards such as AV1, developed by the Alliance for Open Media; AV1 offers excellent compression efficiency, reducing bandwidth usage while improving video quality, which makes it highly attractive for modern streaming and media applications. This paper introduces a novel approach that extends beyond purely convolutional methods by integrating three different self-attention layers into the CNN framework. Applied to the AV1 codec, the proposed post-filter significantly improves video quality, demonstrating the potential of self-attention mechanisms to advance post-filtering beyond the limitations of convolution-based methods. Experimental results show that the proposed network achieves an average BD-rate reduction of 10.40% for the Luma component and of 19.22% and 16.52% for the two Chroma components compared to the AV1 anchor. Visual quality assessments further validate the effectiveness of the approach, showing substantial artifact reduction and detail enhancement in videos. [ABSTRACT FROM AUTHOR]
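
To make the idea in the abstract concrete, below is a minimal sketch of a CNN post-filter with one self-attention layer inserted into its convolutional trunk, written in PyTorch. The class names, channel counts, single attention layer, and the use of nn.MultiheadAttention over flattened spatial positions are illustrative assumptions made here, not the paper's published three-attention-layer architecture.

```python
# Minimal sketch of a self-attention-augmented CNN post-filter (assumed
# architecture for illustration; not the authors' published network).
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Self-attention over the H*W positions of a feature map."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)        # (B, H*W, C) token sequence
        out, _ = self.attn(seq, seq, seq)         # full spatial self-attention
        seq = self.norm(seq + out)                # residual + layer norm
        return seq.transpose(1, 2).reshape(b, c, h, w)

class PostFilter(nn.Module):
    """CNN post-filter predicting a residual correction for a decoded frame."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            SpatialSelfAttention(channels),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, decoded: torch.Tensor) -> torch.Tensor:
        feat = self.body(self.head(decoded))
        return decoded + self.tail(feat)          # restore, don't regenerate

if __name__ == "__main__":
    frame = torch.rand(1, 3, 64, 64)              # toy decoded patch
    print(PostFilter()(frame).shape)              # torch.Size([1, 3, 64, 64])
```

The residual connection in forward mirrors the common practice in restoration networks of predicting a correction to the decoded frame rather than the frame itself, which confines the network's job to removing coding artifacts.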
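
The reported gains are Bjøntegaard delta-rate (BD-rate) figures. As background (this is the standard definition, not taken from this record), BD-rate compares two rate-distortion curves by integrating the gap between their fitted log-rate curves over the shared distortion range:

$$
\mathrm{BD\text{-}rate} = 10^{\frac{1}{D_H - D_L}\int_{D_L}^{D_H}\left[r_2(D)-r_1(D)\right]\,dD} - 1,
\qquad r_i(D) = \log_{10} R_i(D),
$$

where $R_1(D)$ is the anchor's bitrate, $R_2(D)$ the tested method's, and $[D_L, D_H]$ the overlapping range of the distortion metric (typically PSNR). A negative value, such as the −10.40% implied for the Luma component here, means the same quality is achieved at a lower bitrate than the anchor.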
Details
- Language :
- English
- ISSN :
- 2227-7390
- Volume :
- 12
- Issue :
- 18
- Database :
- Complementary Index
- Journal :
- Mathematics (2227-7390)
- Publication Type :
- Academic Journal
- Accession number :
- 180013549
- Full Text :
- https://doi.org/10.3390/math12182874