Adversarial attacks in computer vision: a survey

Journal of Membrane Computing (2024)

Abstract
Deep learning, an important branch of artificial intelligence, has been widely applied across many fields and has driven remarkable advances in computer vision applications such as image classification and object detection. However, deep neural networks (DNNs) have been shown to be adversarially vulnerable. In the image classification task, carefully crafted perturbations are added to clean images, and the resulting adversarial examples can change the predictions of DNNs. The existence of adversarial examples therefore poses a significant obstacle to the secure deployment of DNNs in practice and has attracted considerable attention from researchers in related fields. Recently, a large number of studies on adversarial attacks have been conducted. In this survey, the relevant concepts and background are first introduced. Then, organized by computer vision task, we systematically review existing adversarial attack methods and research progress. Finally, several common defense methods are summarized, and open challenges are discussed.
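The abstract describes adversarial examples as clean inputs plus carefully crafted perturbations that flip a DNN's prediction. As a minimal illustration of this idea (not a method taken from the survey itself), the following sketch applies the fast gradient sign method (FGSM) to a hypothetical linear logistic classifier with fixed weights; all names and values here are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 50
# Hypothetical fixed "model": a linear classifier, predict class 1 if w @ x > 0.
w = rng.choice([-0.2, 0.2], size=d)
# Clean input aligned with the weights, so the clean score is
# sum(|w_i|) * 0.05 = 50 * 0.2 * 0.05 = 0.5 -> predicted class 1.
x = np.where(w > 0, 0.05, -0.05)
score_clean = w @ x

# Untargeted FGSM step for logistic loss with true label y = 1:
# dL/dx = (sigmoid(w @ x) - y) * w, whose sign here is -sign(w),
# so the perturbation pushes the score down by eps * sum(|w_i|) = 1.0.
eps = 0.1
grad = (sigmoid(score_clean) - 1.0) * w
x_adv = np.clip(x + eps * np.sign(grad), -1.0, 1.0)
score_adv = w @ x_adv  # 0.5 - 1.0 = -0.5 -> predicted class 0

print(int(score_clean > 0), int(score_adv > 0))  # prediction flips: 1 0
```

A perturbation of at most 0.1 per coordinate is enough to flip the prediction here because its effect accumulates across all 50 dimensions, which is the intuition behind why high-dimensional image classifiers are vulnerable to small per-pixel changes.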
Keywords
Deep learning, Computer vision, Adversarial attacks, Adversarial examples