Focus by Prior: Deepfake Detection Based on Prior-Attention

2022 IEEE International Conference on Multimedia and Expo (ICME), 2022

Abstract
Nowadays, advanced facial manipulation techniques produce increasingly realistic deepfake videos, which makes deepfake detection more difficult. To capture subtle and intricate artifacts, recent works attempt to enhance low-level textural information through attention-based frameworks. However, these methods require complex simulated data or extra supervision. Being highly dependent on training settings, they not only incur high training costs but are also prone to overfitting. To address this issue, we propose a novel perspective on deepfake detection via so-called prior-attention. Specifically, we introduce prior textural information, such as edges and noise, to model the attention maps explicitly. Benefiting from these natural “attention maps”, our model significantly enhances discriminative information without additional supervision. Furthermore, we design a Feature Abstraction Block (FAB) to facilitate cross-layer feature interaction and insert it into distinct layers of the CNN to detect inconsistencies at multiple spatial levels. Extensive experiments demonstrate that our method achieves performance comparable to state-of-the-art methods.
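
The abstract does not give implementation details, but the core idea it describes, reweighting CNN features with explicitly computed edge and noise priors instead of learned attention, can be sketched as below. The filter choices (Sobel and a simple high-pass kernel), the normalization, the residual fusion rule, and all tensor shapes are illustrative assumptions, not the authors' exact design.

# Minimal PyTorch sketch of the "prior-attention" idea: fixed edge/noise
# filters applied to the input image act as an explicit attention map that
# modulates features from an intermediate CNN layer. Assumptions: Sobel and
# high-pass priors, max-normalization, and residual reweighting.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PriorAttention(nn.Module):
    """Modulates CNN features with edge/noise priors extracted from the input image."""

    def __init__(self):
        super().__init__()
        # Fixed Sobel kernels (edge prior) and a high-pass kernel (noise prior).
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        highpass = torch.tensor([[-1., -1., -1.], [-1., 8., -1.], [-1., -1., -1.]]) / 8.0
        kernels = torch.stack([sobel_x, sobel_y, highpass]).unsqueeze(1)  # (3, 1, 3, 3)
        self.register_buffer("kernels", kernels)

    def forward(self, image: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) in [0, 1]; features: (B, C, h, w) from some CNN layer.
        gray = image.mean(dim=1, keepdim=True)                       # (B, 1, H, W)
        priors = F.conv2d(gray, self.kernels, padding=1)             # (B, 3, H, W)
        attn = priors.abs().sum(dim=1, keepdim=True)                 # (B, 1, H, W)
        attn = attn / (attn.amax(dim=(2, 3), keepdim=True) + 1e-6)   # normalize to [0, 1]
        attn = F.interpolate(attn, size=features.shape[-2:],
                             mode="bilinear", align_corners=False)
        # Residual reweighting: keep the original features, emphasize textured regions.
        return features * (1.0 + attn)


if __name__ == "__main__":
    x = torch.rand(2, 3, 256, 256)      # toy batch of face crops
    feat = torch.rand(2, 64, 64, 64)    # features from an intermediate CNN layer
    out = PriorAttention()(x, feat)
    print(out.shape)                    # torch.Size([2, 64, 64, 64])

Because the priors are computed with fixed filters rather than learned, such a module could in principle be inserted at several CNN layers without extra supervision, which is the property the abstract emphasizes.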
Keywords
Deepfake Detection, Face Forensics, Attention Mechanism