Modeling Sparse Spatio-Temporal Representations For No-Reference Video Quality Assessment

2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP 2017)

Abstract
We present a novel No-Reference (NR) video quality assessment (VQA) algorithm that operates on the sparse representation coefficients of local spatio-temporal (video) volumes. Our work is motivated by the observation that the primary visual cortex adopts a sparse coding strategy to represent visual stimuli. We use the popular K-SVD algorithm to construct spatio-temporal dictionaries that sparsely represent local spatio-temporal volumes of natural videos. We empirically demonstrate that the histogram of the sparse representation coefficients corresponding to each atom in the dictionary can be well modelled using a Generalised Gaussian Distribution (GGD). We then show that the GGD model parameters are good features for distortion estimation, which in turn leads to the proposed NR-VQA algorithm. The GGD model parameters corresponding to each atom of the dictionary form the feature vector that is used to predict quality using Support Vector Regression (SVR). The proposed algorithm delivers competitive performance on the LIVE VQA (SD), EPFL (SD) and LIVE Mobile high definition (HD) databases. Our algorithm is called the SParsity based Objective VIdeo Quality Evaluator (SPOVIQE). The proposed algorithm is simple and computationally efficient compared with other state-of-the-art NR-VQA algorithms.
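The quality-prediction pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sparse coefficients and training labels here are synthetic placeholders, `scipy.stats.gennorm` is used as the GGD model (its shape parameter `beta` and `scale` stand in for the GGD parameters per atom), and the SVR is fit on random data purely to show the interface.

```python
import numpy as np
from scipy.stats import gennorm  # scipy's generalised normal = GGD
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Placeholder for sparse representation coefficients: one row per
# dictionary atom, one column per local spatio-temporal volume.
# (In the paper these come from K-SVD dictionaries; here they are synthetic.)
n_atoms, n_volumes = 64, 1000
coeffs = rng.laplace(scale=0.5, size=(n_atoms, n_volumes))

# Fit a GGD to each atom's coefficient histogram; the fitted shape and
# scale parameters form the feature vector (location fixed at zero).
features = []
for atom_coeffs in coeffs:
    beta, loc, scale = gennorm.fit(atom_coeffs, floc=0)
    features.extend([beta, scale])
features = np.asarray(features)  # length = 2 * n_atoms

# SVR maps GGD feature vectors to quality scores. A real system would be
# trained on features from rated videos with subjective (e.g. DMOS) labels;
# this toy regressor is fit on random data only to show the interface.
X_train = rng.normal(size=(20, features.size))
y_train = rng.uniform(0.0, 100.0, size=20)
svr = SVR(kernel="rbf").fit(X_train, y_train)

predicted_quality = svr.predict(features.reshape(1, -1))
```

Fixing `floc=0` reflects the usual assumption that sparse coefficient distributions are zero-centred, so only the shape and scale parameters carry distortion information.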
Keywords
Sparse representation, spatio-temporal volumes, No-Reference video quality assessment