Navigate Beyond Shortcuts: Debiased Learning through the Lens of Neural Collapse
CVPR 2024
Abstract
Recent studies have noted an intriguing phenomenon termed Neural Collapse:
when a neural network establishes the right correlation between its feature
space and the training targets, its last-layer features, together with the
classifier weights, collapse into a stable and symmetric structure. In this
paper, we extend the investigation of Neural Collapse to biased datasets with
imbalanced attributes. We observe that models easily fall into the pitfall of
shortcut learning and form a biased, non-collapsed feature space early in
training, which is hard to reverse and limits generalization. To tackle the
root cause of biased classification, we follow the recent idea of prime
training and propose an avoid-shortcut learning framework that adds no
training complexity. With well-designed shortcut primes based on the Neural
Collapse structure, models are encouraged to skip the pursuit of simple
shortcuts and to capture the intrinsic correlations instead. Experimental
results demonstrate that our method induces better convergence properties
during training and achieves state-of-the-art generalization performance on
both synthetic and real-world biased datasets.
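The "stable and symmetric structure" referenced in the abstract is, in the Neural Collapse literature, the simplex equiangular tight frame (ETF): class means have equal norms and maximal, equal pairwise angles. A minimal NumPy sketch (the function name `simplex_etf` is illustrative, not from the paper) constructs one and checks its defining properties:

```python
import numpy as np

def simplex_etf(k: int) -> np.ndarray:
    # Columns form a K-class simplex ETF: the standard closed form
    # M = sqrt(K/(K-1)) * (I - (1/K) * 11^T).
    # Each column has unit norm; any two distinct columns have
    # inner product -1/(K-1), the most negative value achievable
    # by K equiangular unit vectors.
    return np.sqrt(k / (k - 1)) * (np.eye(k) - np.ones((k, k)) / k)

K = 4
M = simplex_etf(K)
G = M.T @ M  # Gram matrix: 1 on the diagonal, -1/(K-1) elsewhere
```

Under Neural Collapse, the last-layer class-mean features (and the aligned classifier weights) converge to such a frame, which is what a biased, shortcut-driven feature space fails to reach.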