Limitations of the Lipschitz constant as a defense against adversarial examples.

ECML PKDD 2018 Workshops (Nemesis, UrbReas, SoGood, IWAISe, GDM), 2018

Citations: 82 | Views: 58
Abstract
Several recent papers have discussed utilizing Lipschitz constants to limit the susceptibility of neural networks to adversarial examples. We analyze recently proposed methods for computing the Lipschitz constant. We show that the Lipschitz constant may indeed enable adversarially robust neural networks. However, the methods currently employed for computing it suffer from theoretical and practical limitations. We argue that addressing this shortcoming is a promising direction for future research into certified adversarial defenses.
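
For illustration (a minimal sketch, not taken from the paper): a standard bound in this line of work upper-bounds the global L2 Lipschitz constant of a feedforward network by the product of per-layer spectral norms, since 1-Lipschitz activations such as ReLU do not increase it; any such bound then converts a logit margin into a certified robustness radius. The network architecture and weights below are hypothetical.

import numpy as np

# Hypothetical 3-layer ReLU network (random weights, for illustration only).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 32)),   # maps R^32 -> R^64
           rng.standard_normal((64, 64)),   # maps R^64 -> R^64
           rng.standard_normal((10, 64))]   # maps R^64 -> 10 logits

# Naive upper bound on the global L2 Lipschitz constant: the product of
# per-layer spectral norms. Easy to compute, but typically very loose on
# trained networks, which illustrates the kind of limitation discussed above.
lip_bound = np.prod([np.linalg.norm(W, ord=2) for W in weights])

def certified_radius(logits, lip):
    """L2 radius within which the predicted class provably cannot change,
    given an upper bound `lip` on the network's Lipschitz constant."""
    sorted_logits = np.sort(logits)
    margin = sorted_logits[-1] - sorted_logits[-2]
    # Any logit difference f_i - f_j is at most sqrt(2)*lip-Lipschitz, so a
    # margin m certifies all perturbations of L2 norm below m/(sqrt(2)*lip).
    return margin / (np.sqrt(2) * lip)

Because the certified radius scales inversely with the Lipschitz bound, a loose bound shrinks the certifiable region even when the true constant would certify far more; tightening these estimates is precisely the research direction the abstract advocates.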
Keywords
adversarial examples, Lipschitz constant