SOL: Sampling-based Optimal Linear bounding of arbitrary scalar functions.

Yuriy Biktairov, Jyotirmoy Deshmukh

NeurIPS (2023)

Abstract
Finding tight linear bounds for activation functions in neural networks is an essential part of several state-of-the-art neural network robustness certification tools. An activation function is an arbitrary, nonlinear, scalar function $f: \mathbb{R}^d \rightarrow \mathbb{R}$. In the existing work on robustness certification, such bounds have been computed using human ingenuity for a handful of the most popular activation functions. While a number of heuristics have been proposed for bounding arbitrary functions, to the best of our knowledge no analysis of the optimality of their tightness for general scalar functions has been offered yet. We fill this gap by formulating a concise optimality criterion for tightness of the approximation, which allows us to build optimal bounds for any function convex in the region of interest $R$. For the more general class of functions that are Lipschitz-continuous in $R$, we propose a sampling-based approach (SOL) which, given an instance of the bounding problem, efficiently computes the tightest linear bounds within a given $\varepsilon > 0$ threshold. We leverage an adaptive sampling technique to iteratively build a set of sample points suitable for representing the target activation function. While the theoretical worst-case time complexity of our approach is $O(\varepsilon^{-2d})$, it typically only takes $O(\log^{\beta} \frac{1}{\varepsilon})$ time for some $\beta \ge 1$ and is thus sufficiently fast in practice. We provide empirical evidence of SOL's practicality by incorporating it into a robustness certifier and observing that it produces similar or higher certification rates while taking as little as a quarter of the time compared to the other methods.
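For intuition, the sketch below (not the authors' implementation; the function names, the mean-value tightness objective, and the fixed check grid are illustrative assumptions) shows how a linear lower bound for a sampled scalar function could be obtained with a linear program and refined by adding sample points where the current bound overshoots the function, in the spirit of the abstract:

```python
# Hypothetical sketch of sampling-based linear lower bounding; not the SOL implementation.
import numpy as np
from scipy.optimize import linprog

def linear_lower_bound(samples, values):
    """Solve an LP for a linear lower bound a.x + b <= f(x) on the sample points,
    maximizing the mean value of the bound (one possible tightness objective)."""
    n, d = samples.shape
    c = -np.append(samples.mean(axis=0), 1.0)        # linprog minimizes, so negate
    A_ub = np.hstack([samples, np.ones((n, 1))])     # enforce a.x_i + b <= f(x_i)
    res = linprog(c, A_ub=A_ub, b_ub=values, bounds=[(None, None)] * (d + 1))
    return res.x[:-1], res.x[-1]                     # coefficients a and offset b

def adaptive_lower_bound(f, check_points, init_points, eps=1e-3, max_iter=20):
    """Iteratively add the check point where the bound most exceeds f until the
    worst violation is below eps (assumes f is cheap to evaluate and
    Lipschitz-continuous on the region of interest)."""
    samples = np.array(init_points, dtype=float)
    for _ in range(max_iter):
        a, b = linear_lower_bound(samples, np.apply_along_axis(f, 1, samples))
        gaps = np.apply_along_axis(f, 1, check_points) - (check_points @ a + b)
        worst = gaps.argmin()
        if gaps[worst] >= -eps:                      # soundness holds up to eps
            return a, b
        samples = np.vstack([samples, check_points[worst]])  # refine the sample set
    return a, b

# Example: lower-bound tanh on the interval [-2, 2].
if __name__ == "__main__":
    grid = np.linspace(-2.0, 2.0, 401).reshape(-1, 1)
    a, b = adaptive_lower_bound(lambda x: np.tanh(x[0]), grid, [[-2.0], [0.0], [2.0]])
    print("lower bound: %.3f * x + %.3f" % (a[0], b))
```

An upper bound can be obtained symmetrically by lower-bounding $-f$; the paper's actual optimality criterion and adaptive sampling rule are defined in the full text.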