Transformations as Denoising: A Robust Approach to Weaken Adversarial Facial Images

2022 IEEE INTERNATIONAL CONFERENCE ON NETWORKING, ARCHITECTURE AND STORAGE (NAS)(2022)

Abstract
While facial recognition (FR) has been widely adopted by businesses and governments for various purposes, it raises privacy concerns when user consent is not handled properly. Researchers have therefore proposed methods to evade FR technology by attaching adversarial perturbations to user profile images. Nonetheless, image denoising-based methods have been proposed to increase model robustness against adversarial examples. This paper investigates the impact of transformations on adversarial facial images. In particular, a simple but effective framework, TaD (Transformations as Denoising), is proposed to remove possible adversarial perturbations from user images generated by popular FR privacy protection frameworks. Extensive evaluations examine the reliability of Fawkes and LowKey under various simple transformations. Experimental results indicate that simple transformations can impact protection performance, and that the choice of DNN-based facial feature extractor can enhance the robustness of adversarially perturbed facial images. The results also demonstrate strengths and weaknesses of FR methods and offer suggestions for further improving privacy safeguard tools.
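To illustrate the underlying idea that simple transformations can act as denoisers, the sketch below applies a mean filter to an image carrying small additive perturbations and measures how much of the perturbation survives. This is a hypothetical illustration, not the paper's TaD implementation; the `mean_filter` helper, the filter size, and the synthetic image are all assumptions.

```python
import numpy as np

def mean_filter(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Apply a k x k mean filter to an HxWxC uint8 image.

    Averaging neighbouring pixels attenuates high-frequency content,
    which is where small adversarial perturbations typically live.
    (Illustrative transformation only, not the paper's exact method.)
    """
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    acc = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.clip(acc / (k * k), 0, 255).astype(np.uint8)

# Synthetic demo: a flat "clean" image plus small random perturbations,
# standing in for an adversarially protected profile photo.
rng = np.random.default_rng(0)
clean = np.full((64, 64, 3), 128, dtype=np.uint8)
noise = rng.integers(-8, 9, size=clean.shape)
perturbed = np.clip(clean.astype(int) + noise, 0, 255).astype(np.uint8)

filtered = mean_filter(perturbed)
residual_before = np.abs(perturbed.astype(int) - 128).mean()
residual_after = np.abs(filtered.astype(int) - 128).mean()
print(f"perturbation before: {residual_before:.2f}, after: {residual_after:.2f}")
```

The filtered residual is substantially smaller than the raw perturbation, which is the mechanism the paper's evaluation probes: if a cheap transformation removes most of the perturbation, the protection offered by tools such as Fawkes or LowKey may be weakened.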
Keywords
Facial recognition, privacy abuse, facial image protection robustness, deep learning