No Classifier Left Behind: An In-depth Study of the RBF SVM Classifier's Vulnerability to Image Extraction Attacks via Confidence Information Exploitation

2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI), 2020

Abstract
Training image extraction attacks attempt to reverse engineer training images from an already trained machine learning model. Such attacks are concerning because training data can often be sensitive in nature. Recent research has shown that extracting training images is generally much harder than model extraction, which attempts to duplicate the functionality of the model. In this paper, we correct common misperceptions about image extraction attacks and develop a deep understanding of why some trained models are vulnerable to our attack while others are not. In particular, we use the RBF SVM classifier to show that we can extract individual training images from models trained on thousands of images, which refutes the notion that these attacks can only extract an “average” of each class. We also show that increasing the diversity of the training data set leads to more successful attacks. To the best of our knowledge, our work is the first to show semantically meaningful images extracted from the RBF SVM classifier.
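One reason an RBF SVM is a natural target is that its decision function, f(x) = Σᵢ αᵢ yᵢ exp(−γ‖x − xᵢ‖²) + b, is a weighted sum of Gaussian bumps centered on the support vectors, which are themselves literal training points. The abstract does not spell out the attack procedure, so the following is only a minimal sketch of the general confidence-exploitation idea: query the trained model as a black box and hill-climb on pixel values to maximize the reported confidence for a target class. All names, shapes, and hyperparameters below are illustrative assumptions, not the paper's method.

    # Hypothetical confidence-exploitation sketch; not the paper's code.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train = rng.random((200, 64))           # stand-in for flattened images in [0, 1]
    y_train = rng.integers(0, 2, size=200)    # stand-in class labels
    # probability=True exposes the per-class confidence scores the attack queries.
    model = SVC(kernel="rbf", gamma="scale", probability=True).fit(X_train, y_train)

    def extract_image(model, target_class, n_pixels=64, iters=5000, step=0.05):
        # Greedy hill-climbing: propose single-pixel changes and keep those
        # that increase the model's reported confidence for the target class.
        x = np.full(n_pixels, 0.5)            # start from a flat gray image
        best = model.predict_proba(x[None])[0, target_class]
        for _ in range(iters):
            i = rng.integers(n_pixels)
            cand = x.copy()
            cand[i] = np.clip(cand[i] + rng.choice([-step, step]), 0.0, 1.0)
            conf = model.predict_proba(cand[None])[0, target_class]
            if conf > best:
                x, best = cand, conf
        return x                              # candidate reconstruction

    reconstruction = extract_image(model, target_class=1)

On this toy run the result is meaningless because the stand-in data is random; the point is the query pattern: the attacker needs nothing beyond per-class confidence scores, which matches the black-box setting named in the keywords.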
Keywords
Privacy, Machine Learning, Extraction Attacks, Black-box Attack, Cybersecurity