Assured Deep Learning: Practical Defense Against Adversarial Attacks

2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)

Abstract
Deep Learning (DL) models have been shown to be vulnerable to adversarial attacks. In light of these attacks, it is critical to reliably quantify the confidence of a neural network's predictions to enable the safe adoption of DL models in sensitive autonomous tasks (e.g., unmanned vehicles and drones). This article discusses recent research advances in unsupervised model assurance against the strongest adversarial attacks known to date and quantitatively compares their performance. Given the widespread usage of DL models, it is imperative to provide model assurance by carefully examining the feature maps automatically learned within DL models, instead of looking back with regret once deep learning systems have been compromised by adversaries.
Keywords
Adversarial Deep Learning, Unsupervised Model Assurance, Real-time Defense, Reconfigurable Computing
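
To make the "model assurance via feature maps" idea concrete, the sketch below shows one minimal way an unsupervised check on intermediate activations could be implemented in PyTorch. This is an illustrative assumption, not the defense proposed in the paper (whose keywords point to a real-time, reconfigurable-hardware realization): it simply fits a Gaussian to feature-map statistics of clean data and flags inputs whose Mahalanobis-style distance exceeds a threshold calibrated on that clean data.

```python
# Illustrative sketch only: an unsupervised feature-map check for suspicious
# inputs, in the spirit of the model-assurance idea in the abstract. The layer
# choice and the Gaussian-distance detector are assumptions for illustration,
# not the method proposed in the paper.
import torch
import torch.nn as nn


def collect_features(model: nn.Module, layer: nn.Module, loader) -> torch.Tensor:
    """Run clean data through `model` and record the flattened activations at `layer`."""
    feats = []
    handle = layer.register_forward_hook(
        lambda _m, _inp, out: feats.append(out.detach().flatten(1))
    )
    model.eval()
    with torch.no_grad():
        for x, _ in loader:
            model(x)
    handle.remove()
    return torch.cat(feats)


class FeatureMapAssurance:
    """Flags inputs whose intermediate feature statistics deviate strongly
    from those observed on clean data (a simple Mahalanobis-style score)."""

    def __init__(self, clean_feats: torch.Tensor, eps: float = 1e-3):
        self.mean = clean_feats.mean(dim=0)
        centered = clean_feats - self.mean
        cov = centered.T @ centered / (clean_feats.shape[0] - 1)
        self.prec = torch.linalg.inv(cov + eps * torch.eye(cov.shape[0]))
        # Calibrate a threshold on the clean data itself (e.g., the 99th percentile).
        self.threshold = torch.quantile(self.score(clean_feats), 0.99)

    def score(self, feats: torch.Tensor) -> torch.Tensor:
        d = feats - self.mean
        return ((d @ self.prec) * d).sum(dim=1)

    def is_suspicious(self, feats: torch.Tensor) -> torch.Tensor:
        return self.score(feats) > self.threshold
```

In use, one would collect clean-data features once offline, build the detector, and at inference time reject (or flag for fallback handling) any input whose feature-map score crosses the calibrated threshold; a hardware-accelerated variant of such a check is what the "real-time defense" and "reconfigurable computing" keywords suggest, though the exact mechanism is specified in the paper itself.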