
Protecting by attacking: A personal information protecting method with cross-modal adversarial examples

Mengnan Zhao, Bo Wang, Weikuo Guo, Wei Wang

Neurocomputing (2023)

Abstract
Recent developments in AI technology bring more convenience to our lives while also increasing the risk of personal information leakage. In this work, we try to protect personal information contained in images by generating adversarial examples that fool image captioning models. The generated adversarial examples are user-oriented, meaning that users can manipulate or hide sensitive information in the text output as they wish. By doing so, personal information can be well protected from image captioning models. To fulfill this task, we adopt five kinds of adversarial attacks. Experimental results show that our method can successfully protect user security. The PyTorch® implementations can be downloaded from an open-source GitHub project (https://github.com/Dlut-lab-zmn/ImageCaptioning-Attack/). © 2023 Elsevier B.V. All rights reserved.
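The abstract does not detail the five attacks; the full method is in the paper and the linked repository. As a rough illustration of the general idea, the sketch below shows a single FGSM-style gradient-sign step that perturbs an "image" so a toy, stand-in captioner assigns lower probability to a sensitive target token. The toy log-linear model, its dimensions, and the `fgsm_perturb` helper are assumptions for illustration only, not the paper's actual models or attacks.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.03):
    """One gradient-sign step that increases the loss, keeping pixels in [0, 1].
    (Hypothetical helper; not from the paper's repository.)"""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy differentiable "captioner": logits = W @ x, cross-entropy on one token.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))   # 5 hypothetical caption tokens, 8-dim "image"
x = rng.uniform(size=8)       # stand-in for the image pixels
target = 2                    # sensitive token the user wants to suppress

def loss_and_grad(x):
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = -np.log(p[target])             # cross-entropy on the target token
    grad = W.T @ (p - np.eye(5)[target])  # d(loss)/d(x) for the log-linear model
    return loss, grad

loss0, g = loss_and_grad(x)
x_adv = fgsm_perturb(x, g, epsilon=0.05)
loss1, _ = loss_and_grad(x_adv)
print(loss1 > loss0)  # True: the perturbation raises the loss on the sensitive token
```

Because the toy loss is convex in `x`, the clipped ascent step is guaranteed not to decrease it; real captioning models are non-convex, so iterative variants (e.g. PGD-style multi-step attacks) are typically used instead of a single step.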
Key words
Security, Cross-modal, Image captioning, Adversarial attacks