
Learning Fair Representations: Mitigating Statistical Dependencies.

International Conference on Human-Computer Interaction (2024)

Abstract
Growing social awareness that machine learning algorithms can make biased decisions has driven an increase in Responsible AI research in recent years. Algorithmic fairness is one of the concepts that must be considered when designing responsible AI models. The goal of these studies is to ensure that decisions made by machine learning algorithms in automated decision-making systems are free of bias and unaffected by sensitive information that may lead to discrimination against individuals. Learning a fair representation is an effective approach to mitigating algorithmic bias and has been applied successfully in this domain. The objective of such approaches is to create representations that remove sensitive information while retaining the non-sensitive information required for downstream tasks. In this paper, we propose a novel fair representation framework that generates fair representations which can be easily adjusted for a range of downstream classification tasks. Our algorithm integrates a β-VAE encoder with a classifier to extract meaningful features, while leveraging the Hilbert-Schmidt independence criterion (HSIC) [24] as a constraint to maintain statistical independence between the representations and the sensitive attribute. Experimental results on three benchmark datasets demonstrate our model's ability to create fair representations and achieve a better fairness-accuracy tradeoff than state-of-the-art models.
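The abstract names three components: a β-VAE encoder, a task classifier, and an HSIC penalty that discourages statistical dependence between the learned representation and the sensitive attribute. As a rough illustration of the independence constraint only, the sketch below implements the standard biased empirical HSIC estimator with RBF kernels in PyTorch; the function names, kernel bandwidth `sigma`, and penalty weight `lam` are illustrative assumptions, not details from the paper, whose exact estimator and kernel choices may differ.

```python
import torch

def rbf_kernel(x: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Gram matrix of an RBF kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq_dists = torch.cdist(x, x) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def hsic(z: torch.Tensor, s: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Biased empirical HSIC between representations z (n x d) and
    # sensitive attributes s (n x 1): HSIC = tr(K H L H) / (n - 1)^2,
    # where H = I - (1/n) 11^T centers the Gram matrices.
    n = z.size(0)
    K = rbf_kernel(z, sigma)
    L = rbf_kernel(s, sigma)
    H = torch.eye(n, device=z.device) - torch.full((n, n), 1.0 / n, device=z.device)
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

# Hypothetical composite objective in the spirit of the abstract
# (beta_vae_loss, classification_loss, and lam are assumed names):
# total_loss = beta_vae_loss + classification_loss + lam * hsic(z, s)
```

Driving this penalty toward zero pushes the representation z toward statistical independence from the sensitive attribute s, which is the role the abstract assigns to the HSIC constraint.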