Evaluating the Vulnerability of Deep Learning-based Image Quality Assessment Methods to Adversarial Attacks

2023 11th European Workshop on Visual Information Processing (EUVIP)

Abstract
Recent studies have shown that Deep Learning (DL) models are vulnerable to adversarial attacks in image classification tasks. However, while most of this work has focused on classification, only a few studies have addressed the issue in the context of Image Quality Assessment (IQA). This paper investigates the robustness of different Convolutional Neural Network (CNN) models against adversarial attacks when used for an IQA task. We propose an adaptation of state-of-the-art image classification attacks, in both targeted and untargeted modes, to an IQA regression task. We also analyze the correlation between the perturbation's visibility and the attack's success. Our experimental results show that DL-based IQA methods are vulnerable to such attacks, exhibiting a significant decrease in correlation scores. Consequently, developing countermeasures against such attacks is essential for improving the reliability and accuracy of DL-based IQA models. To support reproducible research and fair comparison, we make the code publicly available at https://github.com/hbrachemi/IQA_AttacksSurvey.
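The abstract does not spell out the adaptation itself, but the general idea can be sketched: a classification attack such as FGSM (Goodfellow et al.) can be redirected at a regression output by replacing the cross-entropy loss with a score-based objective. The snippet below is a minimal, hypothetical illustration under that assumption, not the authors' implementation (see the linked repository for that); `iqa_model`, `epsilon`, and the loss choices are assumptions.

```python
# Hypothetical sketch: one-step FGSM adapted from classification to an
# IQA regression model. `iqa_model` is assumed to be a torch.nn.Module
# mapping images in [0, 1] to scalar quality scores.
import torch
import torch.nn.functional as F

def fgsm_iqa(iqa_model, image, epsilon=2.0 / 255.0, target_score=None):
    """Targeted mode: push the predicted score toward `target_score`.
    Untargeted mode: push the predicted score upward, away from its
    clean value (the opposite direction works symmetrically)."""
    x = image.clone().detach().requires_grad_(True)
    pred = iqa_model(x)  # predicted quality score(s)

    if target_score is not None:
        # Targeted: gradient *descent* on the regression loss toward the target.
        target = torch.full_like(pred, float(target_score))
        F.mse_loss(pred, target).backward()
        x_adv = x - epsilon * x.grad.sign()
    else:
        # Untargeted: gradient *ascent* on the score itself, inflating the
        # apparent quality; use -pred.sum() to deflate it instead.
        pred.sum().backward()
        x_adv = x + epsilon * x.grad.sign()

    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

Comparing correlation metrics such as SROCC or PLCC on clean versus perturbed predictions would then quantify the attack's success relative to the perturbation's visibility, in line with the evaluation the abstract describes.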
Keywords
Blind image quality assessment, Adversarial attacks, Robustness, Deep learning, Convolutional neural networks