Adversarial Magnification to Deceive Deepfake Detection through Super Resolution
arXiv (2024)
Abstract
Deepfake technology is rapidly advancing, posing significant challenges to
the detection of manipulated media content. In parallel, adversarial
attack techniques have been developed to fool deepfake detectors and make
deepfakes even harder to detect. This paper explores the
application of super-resolution techniques as a possible adversarial attack against
deepfake detection. Through our experiments, we demonstrate that minimal
changes made by these methods in the visual appearance of images can have a
profound impact on the performance of deepfake detection systems. We propose a
novel attack that uses super-resolution as a fast, black-box, and effective method
to camouflage fake images and/or generate false alarms on pristine images. Our
results indicate that applying super-resolution can significantly impair
the accuracy of deepfake detectors, highlighting the vulnerability of
such systems to adversarial attacks. The code to reproduce our experiments is
available at:
https://github.com/davide-coccomini/Adversarial-Magnification-to-Deceive-Deepfake-Detection-through-Super-Resolution
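The abstract describes the attack as a black-box preprocessing step: an image is passed through a super-resolution pipeline before reaching the detector, so the visual appearance barely changes while detector-relevant statistics do. The sketch below illustrates that round-trip idea only; the paper uses learned super-resolution models, whereas the `downscale`/`upscale` helpers here are hypothetical stdlib stand-ins (block averaging and pixel replication), not the authors' implementation.

```python
# Toy sketch of the SR-based attack pipeline: downscale the input,
# then upscale it back, and feed the result to the detector.
# Block averaging + pixel replication stand in for a learned SR model.

def downscale(img, factor=2):
    """Average each factor x factor block (toy low-resolution step)."""
    h, w = len(img), len(img[0])
    return [
        [
            sum(img[y * factor + dy][x * factor + dx]
                for dy in range(factor) for dx in range(factor)) / factor ** 2
            for x in range(w // factor)
        ]
        for y in range(h // factor)
    ]

def upscale(img, factor=2):
    """Replicate each pixel (stand-in for a learned SR upscaler)."""
    return [
        [img[y // factor][x // factor] for x in range(len(img[0]) * factor)]
        for y in range(len(img) * factor)
    ]

def sr_attack(img, factor=2):
    """Super-resolution round trip: small visual change, altered detail."""
    return upscale(downscale(img, factor), factor)

# Example: a 4x4 checkerboard "image". The round trip preserves the
# global brightness but smooths the high-frequency pattern that a
# detector's features may depend on.
image = [[float((x + y) % 2) for x in range(4)] for y in range(4)]
attacked = sr_attack(image)
```

In a real attack, `sr_attack` would wrap an actual SR network, and the output would be submitted to the detector unchanged otherwise, which is what makes the method black-box with respect to the detector.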