Adversarial Label Flips Attack to Decrease the Security of SVM Against Evasion Attack

Zhuohuang Chen, Zhimin He, Maijie Deng, Yan Zhou, Haozhen Situ

2023 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR) (2023)

Abstract
Pattern recognition has been employed in many security-sensitive applications, e.g., malware detection and intrusion detection, to detect malicious samples. One of the most popular adversarial attacks is the evasion attack, which deliberately modifies malicious samples in order to evade detection by a deployed pattern recognition system at test time. However, for malicious samples far away from the decision boundary of a classifier, a successful evasion attack comes at high cost, or is even impossible. In this paper, we propose a label flips attack that misleads the learning process of a classifier, aiming to decrease the classifier's security against evasion attacks, i.e., making evasion attacks much easier. The attack samples are greedily selected to minimize the security of the classifier, measured as the expected least cost of a successful evasion attack on the malicious samples. We conducted experiments on spam filtering and malware detection and showed that the proposed label flips attack greatly decreases the security of support vector machines against evasion attacks. An attacker can not only conduct a successful evasion attack at lower cost, but also achieve a higher success rate within a given capability. To speed up the proposed label flips attack, we devised a pre-processing technique that selects candidate training samples using a kernel density estimator (KDE). This paper reveals a potential threat to machine learning algorithms, which is of great importance for developing secure systems.
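The abstract outlines two ingredients: a greedy search that flips training labels to minimize a security measure (the expected least cost of a successful evasion), and a KDE-based pre-filter that shrinks the candidate pool. A minimal sketch of that loop on toy data is shown below; the distance-to-boundary proxy for "security", the `flip_budget` parameter, and the top-density candidate rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Toy 2-class data: class 0 = benign, class 1 = malicious.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
X_mal = X[y == 1]

def security(clf, X_mal):
    # Proxy for the "expected least cost of successful evasion":
    # mean distance of malicious points to the SVM decision boundary.
    return np.mean(np.abs(clf.decision_function(X_mal)))

# KDE pre-processing: restrict the greedy search to samples in dense
# regions (assumption: dense samples are promising flip candidates;
# the paper's exact selection criterion may differ).
kde = KernelDensity(bandwidth=0.5).fit(X)
candidates = np.argsort(kde.score_samples(X))[-30:]

flip_budget = 5   # attacker's capability: max number of label flips
flipped = set()

for _ in range(flip_budget):
    best_i, best_sec = None, np.inf
    for i in candidates:
        if i in flipped:
            continue
        # Apply the flips chosen so far plus the tentative flip i.
        y_adv = y.copy()
        for j in flipped:
            y_adv[j] = 1 - y_adv[j]
        y_adv[i] = 1 - y_adv[i]
        clf = SVC(kernel="linear").fit(X, y_adv)
        s = security(clf, X_mal)
        if s < best_sec:           # greedily minimize security
            best_sec, best_i = s, i
    flipped.add(best_i)

print("flipped indices:", sorted(int(i) for i in flipped))
```

The pre-filter matters because each greedy step refits one SVM per remaining candidate; cutting the candidate set from the full training set to a small dense subset reduces that cost proportionally.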
Keywords
Adversarial learning,Label flips attack,Evasion attack,Causative attack