Light-VQA+: A Video Quality Assessment Model for Exposure Correction with Vision-Language Guidance
arXiv (2024)
Abstract
Recently, User-Generated Content (UGC) videos have become popular in our
daily lives. However, UGC videos often suffer from poor exposure due to the
limitations of photographic equipment and techniques. Consequently, Video Exposure
Correction (VEC) algorithms have been proposed, including Low-Light Video Enhancement
(LLVE) and Over-Exposed Video Recovery (OEVR). Equally important to
VEC is Video Quality Assessment (VQA). Unfortunately, almost all
existing VQA models are built for general purposes, measuring the quality of a video from
a comprehensive perspective. To address this gap, Light-VQA, trained on the LLVE-QA dataset, was
proposed for assessing LLVE. We extend the work of Light-VQA by expanding the
LLVE-QA dataset into the Video Exposure Correction Quality Assessment (VEC-QA)
dataset, adding over-exposed videos and their corresponding corrected versions. In
addition, we propose Light-VQA+, a VQA model specialized in assessing VEC.
Light-VQA+ differs from Light-VQA mainly in its use of the CLIP model and
vision-language guidance during feature extraction, followed by a new
module informed by the Human Visual System (HVS) for more accurate assessment.
Extensive experimental results show that our model achieves the best
performance against the current State-Of-The-Art (SOTA) VQA models on the
VEC-QA dataset and other public datasets.
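As a rough illustration of what CLIP-based vision-language guidance for exposure-related quality features might look like, the sketch below extracts per-frame CLIP embeddings and scores them against a small set of exposure-describing text prompts. The prompt wording, temporal pooling, and output format are assumptions for illustration only, not the exact Light-VQA+ design.

```python
# Minimal sketch: CLIP-guided frame features for exposure-related quality cues.
# The prompt set, pooling, and feature layout are illustrative assumptions,
# not the Light-VQA+ architecture itself.
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical text prompts describing exposure conditions (assumed, not from the paper).
prompts = ["a well-exposed photo", "an over-exposed photo", "an under-exposed photo"]
text_tokens = clip.tokenize(prompts).to(device)

def video_feature(frame_paths):
    """Return a video-level descriptor built from per-frame CLIP embeddings
    and their similarity to the exposure-condition prompts."""
    images = torch.stack([preprocess(Image.open(p)) for p in frame_paths]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(images)       # (T, 512) frame embeddings
        txt_feat = model.encode_text(text_tokens)   # (3, 512) prompt embeddings
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        sim = img_feat @ txt_feat.T                 # (T, 3) vision-language guidance scores
    # Simple temporal average pooling as the video-level descriptor (assumption).
    return torch.cat([img_feat.mean(0), sim.mean(0)])  # (512 + 3,) feature vector
```

In practice such a descriptor would be concatenated with other spatial-temporal quality features and fed to a regression head that predicts the quality score; the paper's HVS-inspired module is not reproduced here.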