RetouchUAA: Unconstrained Adversarial Attack via Image Retouching
CoRR (2023)
Abstract
Deep Neural Networks (DNNs) are susceptible to adversarial examples.
Conventional attacks generate constrained, noise-like perturbations that fail to
reflect real-world scenarios and are hard to interpret. In contrast, recent
unconstrained attacks mimic natural image transformations occurring in the real
world to produce perceptible but inconspicuous attacks, yet they compromise
realism by neglecting image post-processing and leaving the attack direction
uncontrolled. In this
paper, we propose RetouchUAA, an unconstrained attack that exploits a real-life
perturbation: image retouching styles, highlighting its potential threat to
DNNs. Compared to existing attacks, RetouchUAA offers several notable
advantages. Firstly, RetouchUAA excels in generating interpretable and
realistic perturbations through two key designs: the image retouching attack
framework and the retouching style guidance module. The former is a
human-interpretable retouching framework custom-designed for adversarial
attack: by linearizing images while modelling the local processing and
decision-making in human retouching behaviour, it provides an explicit and
reasonable pipeline for understanding the robustness of DNNs against
retouching. The latter guides the adversarial image towards standard retouching
styles, thereby ensuring its realism. Secondly, owing to the retouching
decision regularization and the persistent attack strategy, RetouchUAA also
exhibits strong attack capability and defense robustness, posing a serious
threat to DNNs. Experiments on ImageNet and Places365 show that RetouchUAA
achieves a nearly 100% white-box attack success rate against three DNNs,
while achieving a better trade-off between image naturalness, transferability
and defense robustness than baseline attacks.
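The core idea of retouching-based unconstrained attacks, searching over a small set of human-interpretable retouching parameters instead of per-pixel noise, can be illustrated with a toy sketch. The `retouch` operators, the threshold "classifier", and the grid search below are illustrative assumptions for exposition only, not the paper's actual framework:

```python
import numpy as np

def retouch(img, gamma, gain):
    """A simple, human-interpretable retouching step:
    gamma correction followed by a brightness gain."""
    return np.clip(gain * np.power(img, gamma), 0.0, 1.0)

def toy_classifier(img):
    """Stand-in for a DNN: predicts class 1 if mean intensity > 0.5."""
    return int(img.mean() > 0.5)

def retouch_attack(img, true_label,
                   gammas=np.linspace(0.5, 2.0, 31),
                   gains=np.linspace(0.8, 1.2, 21)):
    """Grid-search retouching parameters for a label flip.
    Returns (gamma, gain) of the first adversarial retouch, or None.
    The perturbation is a global style change, not pixel noise."""
    for g in gammas:
        for k in gains:
            if toy_classifier(retouch(img, g, k)) != true_label:
                return g, k
    return None

rng = np.random.default_rng(0)
img = rng.uniform(0.35, 0.45, size=(8, 8))   # mean near 0.4, so class 0
params = retouch_attack(img, toy_classifier(img))
print(params)
```

Because the search space is a handful of named retouching controls, any adversarial example found this way is directly readable as "brightened with gamma g and gain k", in contrast to an unstructured noise pattern.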