Provable Preimage Under-Approximation for Neural Networks (Full Version)
Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Computer Science (2023)

Abstract
Neural network verification mainly focuses on local robustness properties,
which can be checked by bounding the image (set of outputs) of a given input
set. However, it is often important to know whether a given property holds
globally for the input domain and, if not, for what proportion of the input
space it is true. Analyzing such properties requires computing preimage
abstractions of neural networks. In this work, we propose an efficient anytime
algorithm for generating symbolic under-approximations of the preimage of any
polyhedron output set for neural networks. Our algorithm combines a novel
technique for cheaply computing polytope preimage under-approximations using
linear relaxation, with a carefully designed refinement procedure that
iteratively partitions the input region into subregions using input and ReLU
splitting in order to improve the approximation. Empirically, we validate the
efficacy of our method across a range of domains, including a high-dimensional
MNIST classification task beyond the reach of existing preimage computation
methods. Finally, as use cases, we showcase the application to quantitative
verification and robustness analysis. We present a sound and complete algorithm
for the former, which exploits our disjoint union of polytopes representation
to provide formal guarantees. For the latter, we find that our method can
provide useful quantitative information even when standard verifiers cannot
verify a robustness property.
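The core idea of the preimage under-approximation can be sketched as follows. If linear relaxation yields a linear lower bound on the network output, then the half-space where that lower bound already satisfies the output property is a polytope guaranteed to lie inside the preimage. This is an illustrative sketch only, not the paper's implementation: the toy network, its weights, and the relaxation coefficient below are all hypothetical, and the fact used is the elementary inequality ReLU(z) ≥ αz for any α in [0, 1].

```python
import numpy as np

# Hypothetical toy network f(x) = v · ReLU(W x + b) for illustration only.
rng = np.random.default_rng(0)
W = np.array([[1.0, -2.0], [0.5, 1.5]])
b = np.array([0.3, -0.1])
v = np.array([1.0, 1.0])  # nonnegative output weights keep the bound simple

def f(x):
    return v @ np.maximum(W @ x + b, 0.0)

# Linear relaxation: ReLU(z) >= alpha * z for any alpha in [0, 1], so with
# v >= 0 we get f(x) >= v @ (alpha * (W x + b)) = a·x + c, a linear lower bound.
alpha = 0.5
a = (v * alpha) @ W
c = (v * alpha) @ b

# Under-approximation of the preimage of {y : y >= t} within the input box:
# the half-space {x : a·x + c >= t}, intersected with the box, is a polytope
# on which the output property provably holds.
t = 0.2
lo, hi = -1.0, 1.0
samples = rng.uniform(lo, hi, size=(10_000, 2))
in_polytope = samples @ a + c >= t

# Soundness check: every sampled point inside the polytope satisfies f(x) >= t.
assert all(f(x) >= t for x in samples[in_polytope])
print(f"{in_polytope.mean():.1%} of the box provably maps into the output set")
```

The refinement step described in the abstract would then split the input box (or a ReLU's pre-activation sign) into subregions, recompute tighter linear bounds per subregion, and take the disjoint union of the resulting polytopes to grow the under-approximation.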