
Light-VQA+: A Video Quality Assessment Model for Exposure Correction with Vision-Language Guidance

Authors :
Zhou, Xunchu
Liu, Xiaohong
Dong, Yunlong
Kou, Tengchuan
Gao, Yixuan
Zhang, Zicheng
Li, Chunyi
Wu, Haoning
Zhai, Guangtao
Publication Year :
2024

Abstract

Recently, User-Generated Content (UGC) videos have gained popularity in our daily lives. However, UGC videos often suffer from poor exposure due to the limitations of photographic equipment and techniques. Therefore, Video Exposure Correction (VEC) algorithms have been proposed, including Low-Light Video Enhancement (LLVE) and Over-Exposed Video Recovery (OEVR). Equally important to VEC is Video Quality Assessment (VQA). Unfortunately, almost all existing VQA models are built for general purposes, measuring video quality from a comprehensive perspective. To address this, Light-VQA, trained on the LLVE-QA dataset, was proposed for assessing LLVE. We extend the work of Light-VQA by expanding the LLVE-QA dataset into the Video Exposure Correction Quality Assessment (VEC-QA) dataset, which adds over-exposed videos and their corresponding corrected versions. In addition, we propose Light-VQA+, a VQA model specialized in assessing VEC. Light-VQA+ differs from Light-VQA mainly in its use of the CLIP model and vision-language guidance during feature extraction, followed by a new module inspired by the Human Visual System (HVS) for more accurate assessment. Extensive experimental results show that our model achieves the best performance against current State-Of-The-Art (SOTA) VQA models on the VEC-QA dataset and other public datasets.
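The record does not include code; the following is only a minimal sketch of what CLIP-based vision-language guidance for exposure-related frame features might look like, using Hugging Face's CLIP. The model checkpoint, prompt wording, and pooling are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): using CLIP to obtain
# vision-language-guided features for sampled video frames.
import torch
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical exposure-related prompts guiding the visual features.
prompts = [
    "a well-exposed, clear video frame",
    "an under-exposed, dark video frame",
    "an over-exposed, washed-out video frame",
]

def frame_features(frames: list[Image.Image]) -> torch.Tensor:
    """Return per-frame image-text similarity scores for the prompts."""
    inputs = processor(text=prompts, images=frames,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image: (num_frames, num_prompts) similarities that could
    # serve as semantic exposure/quality features for a downstream VQA head.
    return out.logits_per_image.softmax(dim=-1)
```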

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.03333
Document Type :
Working Paper