Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks

ANNALS OF APPLIED PROBABILITY (2024)

Abstract
In this paper, we investigate a two-layer fully connected neural network of the form $f(X) = \frac{1}{\sqrt{d_1}} a^{\top} \sigma(WX)$, where $X \in \mathbb{R}^{d_0 \times n}$ is a deterministic data matrix, $W \in \mathbb{R}^{d_1 \times d_0}$ and $a \in \mathbb{R}^{d_1}$ are random Gaussian weights, and $\sigma$ is a nonlinear activation function. We study the limiting spectral distributions of two empirical kernel matrices associated with $f(X)$: the empirical conjugate kernel (CK) and the neural tangent kernel (NTK), beyond the linear-width regime ($d_1 \asymp n$). We focus on the ultra-wide regime, where the width $d_1$ of the first layer is much larger than the sample size $n$. Under appropriate assumptions on $X$ and $\sigma$, a deformed semicircle law emerges as $d_1/n \to \infty$ and $n \to \infty$. We first prove this limiting law for generalized sample covariance matrices with some dependency. To specialize it to our neural network model, we provide a nonlinear Hanson-Wright inequality suitable for neural networks with random weights and Lipschitz activation functions. We also demonstrate nonasymptotic concentration of the empirical CK and NTK around their limiting kernels in spectral norm, along with lower bounds on their smallest eigenvalues. As an application, we show that random feature regression induced by the empirical kernel achieves the same asymptotic performance as its limiting kernel regression in the ultra-wide regime. This allows us to compute the asymptotic training and test errors for random feature regression via the corresponding kernel regression.
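As a concrete illustration of the model above, the following NumPy sketch draws one realization of the Gaussian weights and forms the two empirical kernel matrices the abstract refers to: the CK $\frac{1}{d_1}\sigma(WX)^{\top}\sigma(WX)$ and the NTK obtained by differentiating $f$ in both $(W, a)$. The choice of tanh as the Lipschitz activation, the column-normalized data, and the exact NTK normalization are illustrative assumptions for this sketch, not specifics taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d0, d1, n = 50, 20_000, 200          # ultra-wide regime: d1 >> n
X = rng.standard_normal((d0, n))
X /= np.linalg.norm(X, axis=0)       # column-normalized data (an assumption)

W = rng.standard_normal((d1, d0))    # random Gaussian first-layer weights
a = rng.standard_normal(d1)          # random Gaussian second-layer weights

sigma = np.tanh                      # illustrative Lipschitz activation
dsigma = lambda z: 1.0 - np.tanh(z) ** 2

Z = W @ X                            # pre-activations, shape (d1, n)

# Empirical conjugate kernel: CK = (1/d1) * sigma(WX)^T sigma(WX)
CK = sigma(Z).T @ sigma(Z) / d1

# Empirical NTK for f(X) = a^T sigma(WX) / sqrt(d1), gradients in (W, a):
# NTK = CK + (X^T X) ⊙ ((1/d1) * dsigma(WX)^T diag(a^2) dsigma(WX))
S = (dsigma(Z) * (a ** 2)[:, None]).T @ dsigma(Z) / d1
NTK = CK + (X.T @ X) * S

print("lambda_min(CK)  =", np.linalg.eigvalsh(CK)[0])
print("lambda_min(NTK) =", np.linalg.eigvalsh(NTK)[0])
```

The smallest eigenvalues printed at the end are the quantities the paper's lower bounds control; rerunning the sketch with $d_1$ shrunk toward $n$ gives a feel for how the spectrum differs between the linear-width and ultra-wide regimes.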
Keywords
Random matrix theory, neural networks, random feature regression, neural tangent kernel