1. Advancing Video Quality Assessment for AIGC
- Authors
Xinli Yue, Jianhui Sun, Han Kong, Liangchao Yao, Tianyi Wang, Lei Li, Fengyun Rao, Jing Lv, Fan Xia, Yuetang Deng, Qian Wang, and Lingchen Zhao
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
In recent years, AI generative models have made remarkable progress across various domains, including text generation, image generation, and video generation. However, assessing the quality of text-to-video generation is still in its infancy, and existing evaluation frameworks fall short when compared to those for natural videos. Current video quality assessment (VQA) methods primarily focus on evaluating the overall quality of natural videos and fail to adequately account for the substantial quality discrepancies between frames in generated videos. To address this issue, we propose a novel loss function that combines mean absolute error with cross-entropy loss to mitigate inter-frame quality inconsistencies. Additionally, we introduce the S2CNet technique to retain critical content, while leveraging adversarial training to enhance the model's generalization capabilities. Experimental results demonstrate that our method outperforms existing VQA techniques on the AIGC Video dataset, surpassing the previous state-of-the-art by 3.1% in terms of PLCC.
- Comment
5 pages, 1 figure
- Published
2024
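
The abstract describes a loss that combines mean absolute error on the overall quality score with a cross-entropy term intended to reduce inter-frame quality inconsistency. The snippet below is a minimal sketch of one possible interpretation of such a combined loss, not the authors' implementation; it assumes PyTorch, and the class name `CombinedQualityLoss`, the bin count `num_bins`, and the weight `alpha` are illustrative assumptions.

```python
# Hypothetical sketch: MAE on the video-level score plus cross-entropy over
# per-frame quality bins. Names and the binning scheme are assumptions, not
# the paper's actual loss definition.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CombinedQualityLoss(nn.Module):
    def __init__(self, num_bins: int = 5, alpha: float = 0.5):
        super().__init__()
        self.num_bins = num_bins  # number of discrete quality levels for the CE term
        self.alpha = alpha        # weight balancing the MAE and cross-entropy terms

    def forward(self, pred_score, frame_logits, target_score):
        # pred_score:   (B,) predicted video-level quality scores in [0, 1]
        # frame_logits: (B, T, num_bins) per-frame logits over quality bins
        # target_score: (B,) ground-truth mean opinion scores in [0, 1]
        mae = F.l1_loss(pred_score, target_score)

        # Discretize the video-level target into one bin index shared by all
        # frames, so frames whose predicted bin drifts from it are penalized,
        # discouraging large quality swings between frames.
        target_bin = (target_score * (self.num_bins - 1)).round().long()      # (B,)
        target_bin = target_bin.unsqueeze(1).expand(-1, frame_logits.size(1))  # (B, T)
        ce = F.cross_entropy(
            frame_logits.reshape(-1, self.num_bins),
            target_bin.reshape(-1),
        )
        return mae + self.alpha * ce
```

Usage would follow the standard PyTorch pattern: instantiate `CombinedQualityLoss`, pass the model's video-level score, per-frame logits, and the ground-truth score, and backpropagate the returned scalar.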