Improving the invisibility of adversarial examples with perceptually adaptive perturbation.

Inf. Sci. (2023)

Citations: 3 | Views: 6
Abstract
Deep neural networks (DNNs) are vulnerable to adversarial examples generated by adding subtle perturbations to benign inputs. Although these perturbations are kept small by an Lp-norm constraint, they are still easily spotted by human eyes. This paper proposes the Perceptual Sensitive Attack (PS Attack) to address this flaw with a perceptually adaptive scheme. We incorporate the Just Noticeable Difference (JND) as prior information into adversarial attacks, concentrating image changes in areas to which the human eye is insensitive. By integrating the JND matrix into the Lp-norm constraint, PS Attack projects perturbations onto the JND space around the clean data, yielding more imperceptible adversarial perturbations. PS Attack also mitigates the trade-off between the imperceptibility and transferability of adversarial images by adjusting a visual coefficient. Extensive experiments demonstrate that combining PS Attack with state-of-the-art black-box approaches can significantly improve the naturalness of adversarial examples while maintaining their attack ability. Compared to state-of-the-art transferable attacks, our attacks reduce LPIPS by 8% on average when attacking both typically trained and defended models.
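The core idea of projecting perturbations onto a JND-weighted region can be sketched as follows. This is a minimal illustration assuming a per-pixel JND matrix and an elementwise L∞-style projection scaled by the visual coefficient; the function and parameter names are hypothetical, not the authors' implementation.

```python
import numpy as np

def ps_project(x_adv, x_clean, jnd, eps, visual_coef=1.0):
    """Sketch of a JND-aware projection step (assumed form, not the paper's code).

    Instead of a uniform L_inf budget eps, each pixel gets the budget
    min(eps, visual_coef * jnd), so larger perturbations are allowed only
    where the human eye is least sensitive.
    """
    budget = np.minimum(eps, visual_coef * jnd)      # per-pixel perturbation bound
    delta = np.clip(x_adv - x_clean, -budget, budget)  # project onto the JND box
    return np.clip(x_clean + delta, 0.0, 1.0)          # keep a valid pixel range
```

Raising `visual_coef` loosens the perceptual bound (better transferability, lower imperceptibility), which mirrors the trade-off the abstract describes.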
Keywords
Adversarial examples, Just noticeable difference, Deep neural networks, Image classification, Perceptually adaptive