Efficient Context and Saliency Aware Transformer Network for No-Reference Image Quality Assessment

2023 IEEE International Conference on Visual Communications and Image Processing (VCIP)

Abstract
No-Reference Image Quality Assessment (NR-IQA) aims to estimate perceptual image quality without access to a reference image. To address this task effectively and efficiently, we propose a Context and Saliency aware Transformer Network (CSTNet), built on a lightweight pyramid Vision Transformer (ViT). Specifically, a Multi-scale Context Aware Refinement (MCAR) block is devised to fully leverage the hierarchical context features extracted by the ViT backbone. Further, saliency map prediction is incorporated as a sub-task to simulate human attention to salient regions when perceiving images. Extensive experiments on public image quality datasets demonstrate its efficiency and superiority compared to state-of-the-art models.
Keywords
image quality assessment,transformer,saliency
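
The abstract does not include implementation details of the MCAR block or the saliency sub-task. As a rough illustration only, the sketch below shows the general pattern the abstract describes: fusing hierarchical (multi-scale) backbone features into one representation, then attaching two heads, a scalar quality estimate and a saliency map. All function names, shapes, and the averaging-based fusion are assumptions for illustration, not the authors' architecture.

```python
import numpy as np

def fuse_multiscale(features):
    """Hypothetical multi-scale fusion: upsample every feature map to the
    finest spatial resolution (nearest-neighbor) and average them.
    `features` is a list of (C, H, W) arrays from coarse pyramid stages."""
    target_h, target_w = features[0].shape[1:]
    fused = np.zeros((features[0].shape[0], target_h, target_w))
    for f in features:
        _, h, w = f.shape
        # nearest-neighbor upsampling via index repetition
        up = f[:, np.repeat(np.arange(h), target_h // h), :]
        up = up[:, :, np.repeat(np.arange(w), target_w // w)]
        fused += up
    return fused / len(features)

# Toy hierarchical features, as a pyramid ViT backbone might emit
# (channel width fixed here for simplicity; real pyramids vary it).
feats = [np.random.rand(8, 16, 16),
         np.random.rand(8, 8, 8),
         np.random.rand(8, 4, 4)]
fused = fuse_multiscale(feats)

# Two heads on the shared fused representation (placeholder computations):
quality_score = fused.mean()        # scalar quality estimate (main task)
saliency_map = fused.mean(axis=0)   # per-pixel saliency logits (sub-task)
print(fused.shape, saliency_map.shape)
```

In a real multi-task setup of this kind, the saliency head would be trained against ground-truth saliency maps alongside the quality regression loss, so that the shared features learn to emphasize the regions humans attend to.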