Utilizing Adversarial Examples for Bias Mitigation and Accuracy Enhancement
arXiv (2024)
Abstract
We propose a novel approach to mitigate biases in computer vision models by
utilizing counterfactual generation and fine-tuning. While counterfactuals have
been used to analyze and address biases in DNN models, the counterfactuals
themselves are often generated from biased generative models, which can
introduce additional biases or spurious correlations. To address this issue, we
propose using adversarial images, that is, images that deceive a deep neural
network but not humans, as counterfactuals for fair model training.
Our approach leverages a curriculum learning framework combined with a
fine-grained adversarial loss to fine-tune the model using adversarial
examples. By incorporating adversarial images into the training data, we aim to
prevent biases from propagating through the pipeline. We validate our approach
through both qualitative and quantitative assessments, demonstrating improved
bias mitigation and accuracy compared to existing methods. Qualitatively, our
results indicate that, post-training, the model's decisions are less dependent
on the sensitive attribute and that the model better disentangles the
relationship between sensitive attributes and classification variables.
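
The abstract does not give implementation details, so the following is only a
minimal sketch of the general recipe it describes: fine-tuning on adversarial
examples under a curriculum. The PGD attack, the linear schedule on the
perturbation budget, and the plain cross-entropy loss are all assumptions
standing in for the paper's fine-grained adversarial loss and curriculum
framework, not the authors' actual method.

```python
# Sketch: curriculum fine-tuning on PGD adversarial examples (assumed setup,
# not the paper's implementation).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha=2 / 255, steps=10):
    """Generate adversarial examples within an L-inf ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back onto the eps-ball around the clean image.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def finetune_with_curriculum(model, loader, optimizer, epochs=10,
                             eps_start=1 / 255, eps_end=8 / 255, device="cuda"):
    """Fine-tune on adversarial examples with a growing perturbation budget."""
    model.to(device)
    for epoch in range(epochs):
        # Linear curriculum: mild perturbations first, stronger ones later.
        eps = eps_start + (eps_end - eps_start) * epoch / max(epochs - 1, 1)
        model.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y, eps)
            optimizer.zero_grad()
            # Train on both clean and adversarial views of the batch; a simple
            # stand-in for the paper's fine-grained adversarial loss.
            loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()
    return model
```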