Adv-Eye: A Transfer-Based Natural Eye Makeup Attack on Face Recognition

Jiatian Pi, Junyi Zeng, Quan Lu, Ning Jiang, Haiying Wu, Linchengxi Zeng, Zhiyou Wu

IEEE Access (2023)

Abstract
Deep face recognition (FR) models are vulnerable to adversarial samples generated by adversarial attack methods. However, current attack methods do not adequately expose the security problems of deep FR models, because they either produce adversarial samples that are unnatural and easily perceived by humans, or have poor attack capability with low attack success rates on black-box victim FR models. To achieve a good trade-off between imperceptibility and attack capability, we propose Adv-Eye, a novel method for constructing adversarial facial images by adding natural eyeshadow to the orbital region. Adv-Eye consists of a Makeup Generation Module, a Makeup Blending Module, and an Attack Module. The Makeup Generation Module develops a pre-makeup strategy that helps generative adversarial networks (GANs) accurately generate eyeshadow on the orbital image. The Makeup Blending Module develops a multi-view image visual similarity evaluation method to improve the imperceptibility of the generated eyeshadow. In the Attack Module, an ensemble attack strategy based on fine-grained meta-learning and input decay is applied to improve attack capability under the query-free black-box setting. Experimental results on the LADN and MT datasets show that, compared with existing techniques, the adversarial samples generated by Adv-Eye not only significantly improve visual quality but also achieve average attack success rate improvements of 1.63% and 1.05% on the local black-box FR model and average confidence improvements of 5.33 and 5.22 points on the online commercial FR platform, respectively. These results demonstrate that the pre-makeup strategy and the multi-view image visual similarity evaluation method effectively improve the imperceptibility of the generated adversarial perturbations, and that the Attack Module effectively improves the attack success rate while maintaining high image quality.
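The Attack Module's ensemble strategy, fine-grained meta-learning over surrogate FR models combined with an input decay schedule, is the most algorithmic element described in the abstract. Below is a minimal, hedged PyTorch sketch of such a meta-ensemble gradient step: each surrogate in turn serves as a held-out meta-test model while the others act as meta-train models. The surrogate architectures, cosine impersonation loss, pixel-space update, and exponential decay schedule are illustrative assumptions, not the authors' implementation; Adv-Eye itself constrains the perturbation to GAN-generated eyeshadow in the orbital region rather than updating raw pixels.

```python
# Hedged sketch of a fine-grained meta-learning ensemble attack step over surrogate
# FR models, loosely following the Attack Module description. All concrete choices
# (loss, step sizes, decay form, pixel-space updates) are assumptions for illustration.
import torch
import torch.nn.functional as F


def cosine_loss(feat_adv, feat_target):
    # Impersonation-style objective: pull the adversarial embedding toward the target's.
    return 1.0 - F.cosine_similarity(feat_adv, feat_target, dim=-1).mean()


def meta_ensemble_grad(x_adv, x_target, surrogates, inner_lr=1.0 / 255):
    """One fine-grained meta step: every surrogate takes a turn as the held-out
    meta-test model while the remaining surrogates act as meta-train models."""
    total_grad = torch.zeros_like(x_adv)
    for i, meta_test in enumerate(surrogates):
        meta_train = [m for j, m in enumerate(surrogates) if j != i]
        x = x_adv.clone().detach().requires_grad_(True)
        # Meta-train: aggregate the impersonation loss over the remaining surrogates.
        loss_tr = sum(cosine_loss(m(x), m(x_target).detach()) for m in meta_train)
        g_tr = torch.autograd.grad(loss_tr, x)[0]
        # Virtual update, then evaluate transferability on the held-out surrogate.
        x_virtual = (x - inner_lr * g_tr.sign()).clamp(0, 1).detach().requires_grad_(True)
        loss_te = cosine_loss(meta_test(x_virtual), meta_test(x_target).detach())
        g_te = torch.autograd.grad(loss_te, x_virtual)[0]
        total_grad += g_tr + g_te
    return total_grad / len(surrogates)


def attack(x_src, x_target, surrogates, steps=10, step_size=2.0 / 255, decay=0.9):
    """Query-free black-box attack loop with a simple exponential step decay
    (the exact form of the paper's "input decay" is not specified in the abstract)."""
    x_adv = x_src.clone()
    for t in range(steps):
        grad = meta_ensemble_grad(x_adv, x_target, surrogates)
        x_adv = (x_adv - step_size * (decay ** t) * grad.sign()).clamp(0, 1)
    return x_adv.detach()


# Toy usage with stand-in surrogate embedders; a real attack would use pretrained FR networks.
surrogates = [torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 112 * 112, 128))
              for _ in range(3)]
x_src, x_tgt = torch.rand(1, 3, 112, 112), torch.rand(1, 3, 112, 112)
x_adv = attack(x_src, x_tgt, surrogates)
```

The leave-one-out structure is what makes the ensemble "fine-grained": the virtual update computed on the meta-train surrogates is validated against the held-out model, which encourages perturbations that transfer rather than overfit to any single surrogate.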
Keywords
Face recognition, Perturbation methods, Visualization, Task analysis, Meta-learning, Closed box, Training, Generative adversarial networks, Adversarial attack