LandmarkBreaker: A proactive method to obstruct DeepFakes via disrupting facial landmark extraction

COMPUTER VISION AND IMAGE UNDERSTANDING (2024)

Abstract
The recent development of Deep Neural Networks (DNNs) has significantly increased the realism of AI-synthesized faces, with the most notable examples being DeepFakes. In particular, DeepFake can synthesize the face of a target subject from the face of another subject while retaining the same facial attributes. With the growing number of social media portals, DeepFake videos spread rapidly through the Internet, causing a broad negative impact on society. Recent countermeasures against DeepFake focus on detection, a passive defense that cannot prevent or slow down the generation of DeepFakes. Therefore, in this paper, we focus on proactive defense and describe a new method named LandmarkBreaker, the first dedicated solution to obstruct the generation of DeepFake videos by disrupting facial landmark extraction, inspired by the observation that facial landmark extraction is an indispensable step for the face alignment required in DeepFake synthesis. To disrupt facial landmark extraction, we carefully design adversarial perturbations by optimizing a loss function in an iterative manner. Furthermore, we develop LandmarkBreaker++, which further reduces the perceptibility of the adversarial perturbations using a gradient clipping and face masking strategy. We validate our method on three state-of-the-art facial landmark extractors and investigate the defense performance on the recent Celeb-DF dataset, demonstrating the efficacy of our method in obstructing the generation of DeepFake videos.
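Since the abstract describes the attack only at a high level, the following is a minimal sketch of what an iterative adversarial perturbation against a facial landmark extractor could look like. The function names (`landmark_breaker_sketch`, `extractor`), the L2 heatmap objective, the sign-based step, and all hyperparameters are illustrative assumptions and not the paper's exact formulation; the optional `face_mask` argument only mirrors the face masking idea mentioned for LandmarkBreaker++.

```python
import torch
import torch.nn.functional as F

def landmark_breaker_sketch(extractor, image, face_mask=None,
                            eps=8 / 255, alpha=1 / 255, steps=10):
    """Hypothetical sketch: iteratively perturb `image` so that the
    landmark extractor's predicted heatmaps drift away from its clean
    prediction. `extractor` is assumed to map an image tensor to
    landmark heatmaps; the real LandmarkBreaker loss may differ."""
    image = image.clone().detach()
    with torch.no_grad():
        clean_heatmaps = extractor(image)  # reference prediction on the clean frame

    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Distance between perturbed and clean landmark heatmaps.
        loss = F.mse_loss(extractor(adv), clean_heatmaps)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            step = alpha * grad.sign()          # ascend the loss to disrupt the landmarks
            if face_mask is not None:
                step = step * face_mask         # restrict the perturbation to the face region
            adv = adv + step
            adv = image + (adv - image).clamp(-eps, eps)  # keep the perturbation small
            adv = adv.clamp(0, 1)
        adv = adv.detach()
    return adv
```

In this sketch the perturbation budget `eps` bounds perceptibility in an L-infinity sense; the paper's gradient clipping strategy for LandmarkBreaker++ is not reproduced here.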
Keywords
DeepFake defense, Facial landmark extraction, DeepFake obstruction, DNN security