Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
arXiv (2024)
Abstract
Machine learning models often perform poorly under subpopulation shifts in
the data distribution. Developing methods that allow machine learning models to
better generalize to such shifts is crucial for safe deployment in real-world
settings. In this paper, we develop a family of group-aware prior (GAP)
distributions over neural network parameters that explicitly favor models that
generalize well under subpopulation shifts. We design a simple group-aware
prior that only requires access to a small set of data with group information
and demonstrate that training with this prior yields state-of-the-art
performance – even when only retraining the final layer of a previously
trained non-robust model. Group-aware priors are conceptually simple,
complementary to existing approaches such as attribute pseudo-labeling and
data reweighting, and open up promising new avenues for harnessing Bayesian
inference to enable robustness to subpopulation shifts.
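To make the idea concrete, here is a minimal, hypothetical sketch (not the paper's exact construction) of the MAP view of training under a group-aware prior: a reference parameter vector is first fit on a small group-balanced subset (the only data with group labels), and the final layer is then retrained on all data under a Gaussian prior centered at that reference. The toy data, the choice of logistic regression, and the penalty strength `lam` are all illustrative assumptions.

```python
import math
import random

def fit_logreg(X, y, w_prior=None, lam=0.0, lr=0.1, steps=300):
    """Gradient-descent logistic regression. If w_prior is given, adds a
    Gaussian prior centered at w_prior (MAP with an L2 pull toward it)."""
    d, n = len(X[0]), len(y)
    w = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        if w_prior is not None:
            for j in range(d):
                grad[j] += lam * (w[j] - w_prior[j])  # pull toward prior mean
        for j in range(d):
            w[j] -= lr * grad[j]
    return w

rng = random.Random(0)
# Toy subpopulation shift: a spurious feature tracks the label in the
# majority group but is flipped in the minority group.
n_maj, n_min = 450, 50
X, y = [], []
for i in range(n_maj + n_min):
    yi = rng.randint(0, 1)
    core = yi + 0.5 * rng.gauss(0, 1)                 # truly predictive
    s = yi if i < n_maj else 1 - yi                   # flips for minority
    X.append([core, s + 0.5 * rng.gauss(0, 1)])
    y.append(yi)

# Step 1: fit the prior center on a small group-balanced subset.
idx = rng.sample(range(n_maj), 50) + list(range(n_maj, n_maj + n_min))
w_prior = fit_logreg([X[i] for i in idx], [y[i] for i in idx])

# Step 2: retrain the (last-layer) weights on all data under the prior,
# versus plain ERM for comparison.
w_gap = fit_logreg(X, y, w_prior=w_prior, lam=1.0)
w_erm = fit_logreg(X, y)
print("ERM weights:", w_erm)
print("GAP weights:", w_gap)
```

On this toy problem the prior is fit where the spurious correlation cancels out, so the regularized solution relies less on the spurious coordinate than plain ERM does.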