Depth separation beyond radial functions

arXiv (Cornell University), 2022

Abstract
High-dimensional depth separation results for neural networks show that certain functions can be efficiently approximated by two-hidden-layer networks but not by one-hidden-layer ones in high dimensions. Existing results of this type mainly focus on functions with an underlying radial or one-dimensional structure, which are usually not encountered in practice. The first contribution of this paper is to extend such results to a more general class of functions, namely functions with a piece-wise oscillatory structure, by building on the proof strategy of (Eldan and Shamir, 2016). We complement these results by showing that, if the domain radius and the rate of oscillation of the objective function are constant, then approximation by one-hidden-layer networks holds at a poly(d) rate for any fixed error threshold. The mentioned results show that one-hidden-layer networks fail to approximate high-energy functions whose Fourier representation is spread in the frequency domain, while they succeed at approximating functions having a sparse Fourier representation. Moreover, the choice of the domain represents a source of gaps between these positive and negative approximation results. We conclude the paper by focusing on a compact approximation domain, namely the sphere S^{d-1} in dimension d, where we provide a characterization of the functions which are efficiently approximable by one-hidden-layer networks and of those which are provably not, in terms of their Fourier expansion.
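For orientation, a depth-separation statement of the kind referenced above typically takes the following schematic form; this is an illustrative sketch rather than a theorem from the paper, and the activation \sigma, target g, input distribution \mu, width m, and the exponential lower bound are generic placeholders:

\exists\, f_2(x) = \sum_{i=1}^{m} u_i\, \sigma\!\Big( \sum_{j=1}^{m} v_{ij}\, \sigma(w_{ij}^\top x + b_{ij}) + c_i \Big), \quad m = \mathrm{poly}(d), \quad \|f_2 - g\|_{L^2(\mu)} \le \epsilon,

\text{while any one-hidden-layer network } f_1(x) = \sum_{i=1}^{k} a_i\, \sigma(w_i^\top x + b_i) \text{ with } \|f_1 - g\|_{L^2(\mu)} \le \epsilon \text{ requires } k \ge e^{\Omega(d)}.

In words: a two-hidden-layer network of polynomial size reaches accuracy \epsilon, while any one-hidden-layer network of sub-exponential width cannot.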
Keywords
Neural networks, Depth separation