Distributionally Robust Losses for Latent Covariate Mixtures.

Operations Research (2023)

Abstract
Reliable Machine Learning via Structured Distributionally Robust Optimization

Data sets used to train machine learning (ML) models often suffer from sampling biases and underrepresent marginalized groups. Standard machine learning models are trained to optimize average performance and perform poorly on tail subpopulations. In "Distributionally Robust Losses for Latent Covariate Mixtures," John Duchi, Tatsunori Hashimoto, and Hongseok Namkoong formulate a DRO approach for training ML models to perform uniformly well over subpopulations. They design a worst-case optimization procedure over structured distribution shifts salient in predictive applications: shifts in (a subset of) covariates. The authors propose a convex procedure that controls worst-case subpopulation performance and provide finite-sample (nonparametric) convergence guarantees. Empirically, they demonstrate their worst-case procedure on lexical similarity, wine quality, and recidivism prediction tasks and observe significantly improved performance across unseen subpopulations.

While modern large-scale data sets often consist of heterogeneous subpopulations (for example, multiple demographic groups or multiple text corpora), the standard practice of minimizing average loss fails to guarantee uniformly low losses across all subpopulations. We propose a convex procedure that controls the worst-case performance over all subpopulations of a given size. Our procedure comes with finite-sample (nonparametric) convergence guarantees on the worst-off subpopulation. Empirically, we observe on lexical similarity, wine quality, and recidivism prediction tasks that our worst-case procedure learns models that do well against unseen subpopulations.

Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2022.2363.
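The paper's procedure is structured around shifts in (a subset of) covariates, but the underlying worst-case-subpopulation idea can be illustrated with a simpler, unstructured special case: the worst-case average loss over any subpopulation comprising at least an alpha fraction of the data is (approximately, on a finite sample) the conditional value-at-risk of the per-example losses. The sketch below computes that empirical quantity in Python; the function name and the simulated losses are hypothetical illustrations, and this is not the authors' convex procedure for latent covariate mixtures.

```python
import numpy as np

def worst_subpop_loss(losses, alpha=0.1):
    """Approximate worst-case average loss over any subpopulation
    making up at least an alpha fraction of the sample.

    Empirically this is the mean of the largest ceil(alpha * n) losses,
    i.e. the conditional value-at-risk (CVaR) of the per-example losses.
    It is a simplified, unstructured stand-in for the paper's procedure,
    which additionally restricts shifts to (a subset of) covariates.
    """
    losses = np.asarray(losses, dtype=float)
    k = max(1, int(np.ceil(alpha * losses.size)))
    worst_k = np.sort(losses)[-k:]  # the k largest per-example losses
    return worst_k.mean()

# Hypothetical usage with simulated per-example losses.
rng = np.random.default_rng(0)
per_example_losses = rng.exponential(scale=1.0, size=1000)
print("average loss:          ", per_example_losses.mean())
print("worst 10% subpop loss: ", worst_subpop_loss(per_example_losses, alpha=0.1))
```

Minimizing this kind of tail-average objective, rather than the plain average, is what gives uniform control across subpopulations; the paper's contribution is a convex formulation of this control when the subpopulations are defined by latent covariate mixtures rather than by individual examples.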
Keywords
latent covariate mixtures,robust losses