Deep Shape-Texture Statistics for Completely Blind Image Quality Evaluation
CoRR (2024)
Abstract
Opinion-Unaware Blind Image Quality Assessment (OU-BIQA) models aim to
predict image quality without training on reference images and subjective
quality scores. Among these, image statistical comparison is a classic paradigm,
but its performance is limited by the representation ability of the visual
descriptors. Deep features as visual descriptors have advanced IQA in recent
research, but they have been found to be highly texture-biased and lacking in
shape bias. Building on this, we find that image shape and texture cues
respond differently to distortions, and the absence of either one results
in an incomplete image representation. Therefore, to formulate a well-rounded
statistical description of images, we simultaneously utilize the shape-biased
and texture-biased deep features produced by Deep Neural Networks (DNNs).
More specifically, we design a Shape-Texture Adaptive Fusion
(STAF) module to merge shape and texture information, based on which we
formulate quality-relevant image statistics. The perceptual quality is
quantified by the variant Mahalanobis Distance between the inner and outer
Shape-Texture Statistics (DSTS), wherein the inner and outer statistics
respectively describe the quality fingerprints of the distorted image and
of natural images. The proposed DSTS delicately utilizes shape-texture statistical
relations between different data scales in the deep domain, and achieves
state-of-the-art (SOTA) quality prediction performance on images with
artificial and authentic distortions.
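The abstract does not spell out the exact form of the "variant Mahalanobis Distance", so the sketch below only illustrates the general idea: a NIQE-style pooled-covariance Mahalanobis distance between a multivariate Gaussian fitted to natural-image (outer) statistics and one fitted to the test image's (inner) statistics. The function and variable names (`dsts_quality_score`, `outer_mean`, `inner_feats`, the covariance pooling) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dsts_quality_score(outer_mean, outer_cov, inner_feats):
    """Hypothetical sketch of a Mahalanobis-distance quality score.

    outer_mean : (d,) mean of fused shape-texture features over a natural-image corpus
    outer_cov  : (d, d) covariance of those features
    inner_feats: (n, d) fused shape-texture features extracted from the test image
    Returns a scalar; a larger distance from the natural-image statistics
    is read as lower perceptual quality.
    """
    inner_mean = inner_feats.mean(axis=0)
    inner_cov = np.cov(inner_feats, rowvar=False)
    diff = outer_mean - inner_mean
    # Pool the two covariances (NIQE-style assumption, not necessarily the paper's variant).
    pooled = (outer_cov + inner_cov) / 2.0
    # Pseudo-inverse guards against ill-conditioned covariance estimates.
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```

In such completely blind schemes the outer statistics are fitted once, offline, on a corpus of pristine images, so scoring a test image only requires extracting its features and evaluating the distance; no subjective scores or reference images are needed.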