Deep Neural Networks for Full-Reference and No-Reference Audio-Visual Quality Assessment

ICIP (2021)

Cited by 7
Abstract
In the field of audio and visual quality assessment, most previous works have focused only on single-mode visual or audio signals. However, for multi-mode signals, such as video with accompanying audio, the overall perceptual quality depends on both the video and the audio. In this paper, we propose an objective audio-visual quality assessment (AVQA) architecture for multi-mode signals based on deep neural networks. We first use a pretrained convolutional neural network to extract features from single video frames and the concurrent short audio segments. The extracted features are then fed into Gated Recurrent Unit (GRU) networks for time-sequence modeling. Finally, we use fully connected layers to fuse the qualities of the audio and visual signals into a final quality score. The proposed architecture can be applied to both full-reference and no-reference AVQA. Experimental results on the LIVE-SJTU Database show that our model outperforms state-of-the-art AVQA methods.
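The abstract outlines a three-stage pipeline: per-frame and per-segment features from a pretrained CNN, GRU networks for temporal modeling of each modality, and fully connected fusion into a single score. The paper's actual layer sizes and weights are not given here, so the following is a minimal NumPy sketch of that pipeline with hypothetical dimensions (64-d features, 16-d hidden state) and random stand-in weights in place of trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, W, U):
    """One GRU cell update; W, U hold update (z), reset (r), candidate (h) weights."""
    z = sigmoid(x @ W["z"] + h @ U["z"])                # update gate
    r = sigmoid(x @ W["r"] + h @ U["r"])                # reset gate
    h_tilde = np.tanh(x @ W["h"] + (r * h) @ U["h"])    # candidate state
    return (1 - z) * h + z * h_tilde

def run_gru(seq, hidden=16):
    """Run a randomly initialized GRU over a feature sequence, return final state."""
    d = seq.shape[1]
    W = {k: rng.standard_normal((d, hidden)) * 0.1 for k in "zrh"}
    U = {k: rng.standard_normal((hidden, hidden)) * 0.1 for k in "zrh"}
    h = np.zeros(hidden)
    for x in seq:
        h = gru_step(x, h, W, U)
    return h  # final hidden state summarizes the whole sequence

# Stand-ins for pretrained-CNN outputs: 30 video frames and 30 concurrent
# audio segments, each mapped to a 64-d feature vector (all hypothetical).
video_feats = rng.standard_normal((30, 64))
audio_feats = rng.standard_normal((30, 64))

h_v = run_gru(video_feats)   # temporal summary of the visual stream
h_a = run_gru(audio_feats)   # temporal summary of the audio stream

# Fully connected fusion: concatenate both modalities, project to one score.
fused = np.concatenate([h_v, h_a])                # shape (32,)
W_fc = rng.standard_normal(fused.size) * 0.1
score = float(fused @ W_fc)                       # predicted quality score
```

In a full-reference variant the same pipeline would also consume reference-signal features; here only the no-reference path is sketched.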
Keywords
Audio-visual quality assessment, convolutional neural network, multimodal fusion