Early alignment in two-layer networks training is a two-edged sword
CoRR (2024)
Abstract
Training neural networks with first-order optimisation methods is at the core
of the empirical success of deep learning. The scale of initialisation is a
crucial factor, as small initialisations are generally associated with a
feature-learning regime, in which gradient descent is implicitly biased towards
simple solutions. This work provides a general and quantitative description of
the early alignment phase, originally introduced by Maennel et al. (2018). For
small initialisation and networks with a single hidden ReLU layer, the early
stage of the training dynamics leads to an alignment of the neurons towards key
directions. This alignment induces a sparse representation of the network,
which is directly related to the implicit bias of gradient flow at convergence.
This sparsity-inducing alignment, however, comes at the expense of difficulties
in minimising the training objective: we also provide a simple data example for
which overparameterised networks fail to converge towards global minima and
instead converge only to a spurious stationary point.
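The alignment phenomenon the abstract describes can be observed numerically. Below is a minimal sketch of training a one-hidden-layer ReLU network from a small initialisation and inspecting how the neuron directions concentrate on a few key directions. The toy 2-D dataset, squared loss, width, learning rate, and initialisation scale are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data with +/-1 labels (illustrative only, not the
# paper's counterexample dataset).
X = rng.standard_normal((8, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

m, d = 50, 2        # hidden width and input dimension (assumed)
scale = 1e-4        # small initialisation scale -> feature-learning regime
W = scale * rng.standard_normal((m, d))  # input weights w_j
a = scale * rng.standard_normal(m)       # output weights a_j

lr = 0.05
for step in range(20_000):
    H = np.maximum(X @ W.T, 0.0)         # hidden activations, shape (n, m)
    r = H @ a - y                        # residuals of the squared loss
    grad_a = H.T @ r / len(y)
    grad_W = ((r[:, None] * (H > 0)) * a).T @ X / len(y)
    a -= lr * grad_a
    W -= lr * grad_W

# In the early phase the directions w_j / ||w_j|| collapse onto a few
# key directions; the sorted angles should cluster rather than spread.
dirs = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
angles = np.degrees(np.arctan2(dirs[:, 1], dirs[:, 0]))
print(np.round(np.sort(angles), 1))
```

With a small `scale`, the printed angles cluster around a handful of values, reflecting the sparse representation the paper links to the implicit bias of gradient flow; with a large `scale`, the directions barely move from their random initialisation.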