MAKE: A Combined Autoencoder to Detect Adversarial Examples

Zhaoxiang He, Zihan Yu, Liquan Chen, Zhongyuan Qin, Qunfang Zhang, Yipeng Zhang

2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP)

Abstract
With the continuous development and maturity of deep learning technologies, security issues in deep learning are receiving increasing attention, and the emergence of adversarial examples has made researchers more aware of this problem. Adding a small perturbation to an original image can cause a deep learning model to misclassify it, which seriously hinders the future development and adoption of deep learning technology. Therefore, a detection method based on an MSE and KL AutoEncoder (MKAE) is proposed. By using MSE and KL divergence together, MKAE is shown to resist various types of adversarial attacks. At the same time, compared with the existing feature squeezing and MagNet detection algorithms, the detection accuracy is improved. The method does not depend on a specific attack mode, making it a transferable detection and defense model.
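
The abstract does not give implementation details, but the core detection idea it describes (scoring each input with an autoencoder's reconstruction MSE together with a KL-divergence term, and flagging inputs whose scores exceed thresholds calibrated on clean data, in the spirit of MagNet-style detectors) can be sketched as below. The architecture, the softmax temperature, the "either score exceeds its threshold" rule, and the threshold values are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DetectorAE(nn.Module):
    """Small convolutional autoencoder (hypothetical architecture,
    not the one from the paper) used only to reconstruct inputs."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def detection_scores(ae, classifier, x, temperature=10.0):
    """Per-example scores: (1) MSE between the input and its
    reconstruction, (2) KL divergence between the classifier's
    softened predictions on the input and on the reconstruction."""
    with torch.no_grad():
        recon = ae(x)
        mse = F.mse_loss(recon, x, reduction="none").flatten(1).mean(dim=1)
        p = F.softmax(classifier(x) / temperature, dim=1)
        q = F.softmax(classifier(recon) / temperature, dim=1)
        kl = (p * (p.clamp_min(1e-12).log() - q.clamp_min(1e-12).log())).sum(dim=1)
    return mse, kl


def is_adversarial(ae, classifier, x, mse_thr, kl_thr):
    """Flag an input if either score exceeds its threshold; the
    thresholds would be calibrated on clean validation data, e.g.
    as a high percentile of the clean-score distribution."""
    mse, kl = detection_scores(ae, classifier, x)
    return (mse > mse_thr) | (kl > kl_thr)


if __name__ == "__main__":
    ae = DetectorAE()
    clf = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # dummy classifier
    x = torch.rand(8, 1, 28, 28)  # placeholder MNIST-sized batch
    print(is_adversarial(ae, clf, x, mse_thr=0.05, kl_thr=0.1))
```

In such a setup the thresholds are typically chosen so that a fixed small fraction of clean validation inputs is rejected (e.g. the 95th percentile of clean scores); combining the reconstruction MSE with the KL term is the feature the abstract attributes to MKAE, while everything else here is a sketch under the stated assumptions.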
Keywords
Adversarial examples, autoencoder, deep learning