PRODeep: a platform for robustness verification of deep neural networks

ESEC/FSE '20: 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Virtual Event, USA, November 2020

Abstract
Deep neural networks (DNNs) have been applied in safety-critical domains such as self-driving cars, aircraft collision avoidance systems, malware detection, etc. In such scenarios, it is important to provide a safety guarantee for the robustness property, namely that outputs remain invariant under small perturbations of the inputs. For this purpose, several algorithms and tools have been developed recently. In this paper, we present PRODeep, a platform for robustness verification of DNNs. PRODeep incorporates constraint-based, abstraction-based, and optimisation-based robustness checking algorithms. It has a modular architecture, enabling easy comparison of different algorithms. With experimental results, we illustrate the use of the tool and the easy combination of those techniques.
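As a sketch of the robustness property the abstract refers to (using generic notation not taken from the paper), local robustness of a classifier f at an input x_0 with perturbation radius epsilon, typically measured in an l_p norm, can be stated as:

\forall x.\ \|x - x_0\|_p \le \epsilon \;\Rightarrow\; \operatorname*{argmax}_i f_i(x) = \operatorname*{argmax}_i f_i(x_0)

Constraint-based, abstraction-based, and optimisation-based verifiers all aim to prove or refute an instance of this property, differing mainly in how they encode or approximate the set of outputs reachable from the perturbation ball.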
Keywords
Robustness, Verification, Deep Neural Networks