AdvShadow: Evading DeepFake Detection via Adversarial Shadow Attack

Jiatong Liu, Mingcheng Zhang, Jianpeng Ke, Lina Wang

ICASSP 2024 - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024

Abstract
With the emergence of techniques known as DeepFakes, there has been a notable proliferation of DeepFake detectors rooted in deep learning. These detectors aim to expose subtle distinctions between genuine and counterfeit facial images across the spatial, frequency, and physiological domains. Unfortunately, these detectors are susceptible to adversarial attacks. In this study, we introduce a novel transferable adversarial attack named AdvShadow, designed to attack DeepFake detectors by leveraging the natural shadows encountered in real life. The proposed AdvShadow comprises three components: a random shadow generator, a shadow overlay network, and adversarial shadow generation. Initially, we construct a randomly shadowed facial dataset, using an additional shadow overlay network to produce adversarial samples for training. We then generate adversarial shadows for DeepFake datasets, mitigating the luminance disparities between real and synthesized images. Through extensive experiments, we demonstrate the effectiveness and transferability of AdvShadow for attacks under black-box settings.
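To make the pipeline concrete, below is a minimal, illustrative sketch (not the authors' implementation) of the general idea the abstract describes: rasterize a randomly shaped shadow, overlay it on a face image by darkening the covered pixels, and search for shadow parameters that lower a surrogate detector's "fake" score, in the spirit of a black-box, transfer-based attack. The `detector` callable, the parameter ranges, and all function names here are hypothetical assumptions for illustration only.

```python
# Illustrative sketch of an adversarial shadow search (hypothetical, not the paper's code).
import numpy as np
import torch
from PIL import Image, ImageDraw


def random_shadow_mask(size, n_vertices=5, rng=None):
    """Rasterize a random polygon as a shadow mask with values in [0, 1]."""
    rng = rng or np.random.default_rng()
    h, w = size
    pts = [(rng.uniform(0, w), rng.uniform(0, h)) for _ in range(n_vertices)]
    canvas = Image.new("L", (w, h), 0)
    ImageDraw.Draw(canvas).polygon(pts, fill=255)
    return np.asarray(canvas, dtype=np.float32) / 255.0


def overlay_shadow(image, mask, intensity):
    """Darken pixels under the mask: out = img * (1 - intensity * mask)."""
    return image * (1.0 - intensity * mask[..., None])


def adversarial_shadow_search(image, detector, n_trials=200, rng=None):
    """Random search over shadow shape and intensity that minimizes the
    surrogate detector's 'fake' score (image: float32 HxWx3 in [0, 1])."""
    rng = rng or np.random.default_rng()
    best_img, best_score = image, float("inf")
    for _ in range(n_trials):
        mask = random_shadow_mask(image.shape[:2], rng=rng)
        intensity = rng.uniform(0.2, 0.6)      # assumed plausible shadow darkness range
        candidate = overlay_shadow(image, mask, intensity)
        with torch.no_grad():
            x = torch.from_numpy(candidate).permute(2, 0, 1).unsqueeze(0)
            score = detector(x).item()          # assumed scalar: higher = more likely "fake"
        if score < best_score:
            best_img, best_score = candidate, score
    return best_img, best_score
```

In the paper itself, the shadow overlay is learned by a dedicated network and the adversarial shadows are optimized for transferability; the random search above only conveys the overall attack structure under the stated assumptions.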
Keywords
DeepFakes,DeepFake detection,transferable adversarial shadows