Traffic sign attack via pinpoint region probability estimation network

PATTERN RECOGNITION (2024)

Abstract
Recent work shows that Deep Neural Networks (DNNs) achieve excellent performance on many tasks, but they are vulnerable to adversarial examples, which pose Artificial Intelligence (AI) security risks. In the autonomous driving field in particular, attacking a traffic sign classification network can have serious consequences. Most existing research focuses on digital-level attacks that pursue smaller or more imperceptible adversarial noise. Since attacks that can be carried out in the real world tend to arise in more security-critical scenarios, we propose an adaptive adversarial example generation algorithm for physical attacks in real-world settings. For traffic sign classification, our approach proceeds in two steps. The first step generates a probability map that precisely predicts, for each pixel of the input image, the probability of being attacked, using the proposed Pinpoint Region Probability Estimation Network (PRPEN), while simultaneously shrinking the highlighted area of the map. This can also be viewed as a per-pixel binary classification problem (each pixel is either suitable for attacking or not) with a constraint on the number of pixels assigned to the suitable class. The second step derives a mask from the probability map and optimizes adversarial patches only within the region the mask selects. Experimental results show that our method achieves a misclassification rate of nearly 100% on several widely used networks with even smaller patches. We also show how to effectively disguise a sign as a target class to mislead DNN classifiers, which in turn informs AI security.
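To make the two-step pipeline concrete, below is a minimal PyTorch sketch based only on the abstract's description. The names prpen, classifier, and the pixel budget are illustrative assumptions, not the authors' released code; the targeted "disguise" variant would minimize cross-entropy toward the chosen target label instead.

```python
# Sketch of the two-step attack described above (assumptions, not the paper's code):
# Step 1: a PRPEN-style network predicts a per-pixel attack probability map.
# Step 2: the map is binarized into a mask, and a patch is optimized on masked pixels only.
import torch
import torch.nn.functional as F

def make_mask(prob_map: torch.Tensor, budget: int) -> torch.Tensor:
    """Keep the `budget` most attackable pixels (the 'suitable' class)."""
    flat = prob_map.flatten()
    thresh = flat.topk(budget).values.min()
    return (prob_map >= thresh).float()

def attack(image, label, prpen, classifier, budget=400, steps=200, lr=0.05):
    # image: (C, H, W) in [0, 1]; label: scalar LongTensor with the true class.
    # Step 1: predict where the image is most vulnerable.
    with torch.no_grad():
        prob_map = prpen(image.unsqueeze(0)).squeeze()  # assumed output: (H, W) in [0, 1]
    mask = make_mask(prob_map, budget)                  # (H, W) binary

    # Step 2: optimize patch pixels only inside the masked region.
    patch = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        adv = (image * (1 - mask) + patch.clamp(0, 1) * mask).unsqueeze(0)
        logits = classifier(adv)
        # Untargeted: maximize the true-class loss (minimize its negative).
        # Targeted "disguise": use +F.cross_entropy(logits, target.view(1)) instead.
        loss = -F.cross_entropy(logits, label.view(1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return image * (1 - mask) + patch.detach().clamp(0, 1) * mask
```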
Keywords
Adversarial examples, Traffic sign attack, AI security, Neural networks, Probability estimation