A rationale from frequency perspective for grokking in training neural network
CoRR (2024)
Abstract
Grokking is the phenomenon where neural networks (NNs) initially fit the
training data and later generalize to the test data during training. In this
paper, we empirically provide a frequency perspective to explain the emergence
of this phenomenon in NNs. The core insight is that the networks initially
learn the less salient frequency components present in the test data. We
observe this phenomenon across both synthetic and real datasets, offering a
novel viewpoint for elucidating the grokking phenomenon by characterizing it
through the lens of frequency dynamics during the training process. Our
empirical frequency-based analysis sheds new light on understanding the
grokking phenomenon and its underlying mechanisms.
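To make the frequency perspective above concrete, the sketch below (an illustration, not the paper's actual analysis pipeline) measures the amplitude of individual frequency components of a 1D signal with a plain discrete Fourier transform. Tracking such per-frequency amplitudes of a network's output over training is one way to observe which components are learned early and which later; the signal, frequencies, and amplitudes here are all made up for illustration.

```python
import cmath
import math

def freq_amplitude(samples, k):
    """Magnitude of the k-th discrete Fourier coefficient of a real signal,
    normalized by the number of samples."""
    n = len(samples)
    coeff = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
    return abs(coeff) / n

# Toy signal: a dominant low-frequency component (k=1) plus a weaker
# high-frequency component (k=10).
n = 256
signal = [math.sin(2 * math.pi * 1 * t / n)
          + 0.2 * math.sin(2 * math.pi * 10 * t / n)
          for t in range(n)]

low = freq_amplitude(signal, 1)    # ~0.5 (a unit sine splits energy across +/-k)
high = freq_amplitude(signal, 10)  # ~0.1

print(low > high)  # the low-frequency component dominates
```

In a grokking experiment one would apply the same measurement to the residual or output of the network at successive training steps, comparing the amplitudes of salient and less salient frequencies as training progresses.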