ViG-Bias: Visually Grounded Bias Discovery and Mitigation
arXiv (2024)
Abstract
The proliferation of machine learning models in critical decision making
processes has underscored the need for bias discovery and mitigation
strategies. Identifying the reasons behind a biased system is not
straightforward, since on many occasions they are associated with hidden
spurious correlations that are not easy to spot. Standard approaches rely on
bias audits performed by analyzing model performance in pre-defined subgroups
of data samples, usually characterized by common attributes like gender or
ethnicity when it comes to people, or other specific attributes defining
semantically coherent groups of images. However, it is not always possible to
know a priori the specific attributes defining the failure modes of visual
recognition systems. Recent approaches propose to discover these groups by
leveraging large vision-language models, which enable the extraction of
cross-modal embeddings and the generation of textual descriptions to
characterize the subgroups where a certain model is underperforming. In this
work, we argue that incorporating visual explanations (e.g. heatmaps generated
via GradCAM or other approaches) can boost the performance of such bias
discovery and mitigation frameworks. To this end, we introduce Visually
Grounded Bias Discovery and Mitigation (ViG-Bias), a simple yet effective
technique that can be integrated into a variety of existing frameworks to
improve both discovery and mitigation performance. Our comprehensive
evaluation shows that incorporating visual explanations enhances existing
techniques like DOMINO, FACTS and Bias-to-Text, across several challenging
datasets, including CelebA, Waterbirds, and NICO++.
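The core idea of grounding bias discovery visually can be illustrated with a minimal sketch: mask out image regions that a saliency map (e.g. a Grad-CAM heatmap) marks as irrelevant, so that any downstream embedding or subgroup clustering operates only on the regions actually driving the model's prediction. This is an illustrative assumption of how such masking could work, not the paper's exact implementation; the `mask_with_heatmap` helper and its threshold are hypothetical.

```python
import numpy as np

def mask_with_heatmap(image, heatmap, threshold=0.5):
    """Zero out image regions the model does not attend to.

    image:   H x W x C float array in [0, 1]
    heatmap: H x W saliency map (e.g. from Grad-CAM), arbitrary scale
    """
    # Normalize the heatmap to [0, 1].
    h = heatmap - heatmap.min()
    if h.max() > 0:
        h = h / h.max()
    # Binary mask of salient pixels; low-attention regions are zeroed out.
    mask = (h >= threshold).astype(image.dtype)
    return image * mask[..., None]

# Toy example: a 4x4 RGB image where the saliency map highlights
# only the top-left 2x2 patch.
img = np.ones((4, 4, 3))
heat = np.zeros((4, 4))
heat[:2, :2] = 1.0
masked = mask_with_heatmap(img, heat)
# The masked image could then be fed to a vision-language encoder
# (e.g. CLIP) so that subgroup discovery and textual descriptions
# focus on the regions behind the model's decision.
```

In a full pipeline, the masked images would replace the raw images wherever an existing framework (DOMINO, FACTS, Bias-to-Text) extracts cross-modal embeddings.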