
Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks

CoRR (2021)

Abstract
We analyze the learning dynamics of infinitely wide neural networks with a finite-sized bottleneck. Unlike the neural tangent kernel limit, a bottleneck in an otherwise infinite-width network allows data-dependent feature learning in its bottleneck representation. We empirically show that a single bottleneck in infinite networks dramatically accelerates training when compared to purely infinite networks, with improved overall performance. We discuss the acceleration phenomenon by drawing similarities to infinitely wide deep linear models, where the acceleration effect of a bottleneck can be understood theoretically.
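The abstract describes an architecture that is wide everywhere except for one narrow layer whose activations form a learned, data-dependent representation. The following is a minimal sketch of such a wide-bottleneck-wide ReLU MLP with NTK-style 1/sqrt(fan-in) initialization; the widths, function names, and output layer are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def init_bottleneck_mlp(d_in, wide=4096, bottleneck=8, d_out=1, seed=0):
    """Wide -> narrow bottleneck -> wide MLP with 1/sqrt(fan-in) scaling.

    In the paper's setting the wide layers are taken to infinite width while
    the bottleneck stays finite; here both are finite for illustration.
    """
    rng = np.random.default_rng(seed)
    dims = [d_in, wide, bottleneck, wide, d_out]
    return [rng.standard_normal((m, n)) / np.sqrt(m)
            for m, n in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """ReLU forward pass; the narrow layer's activations are the features
    that can adapt to data, unlike in the pure NTK limit."""
    h = x
    for W in params[:-1]:
        h = np.maximum(h @ W, 0.0)
    return h @ params[-1]

x = np.ones((5, 16))              # batch of 5 inputs, d_in = 16
params = init_bottleneck_mlp(16)  # hypothetical widths
print(forward(params, x).shape)   # (5, 1)
```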
Key words
infinitely wide neural networks, bottlenecks, feature learning, neural networks