Classification Utility, Fairness, and Compactness via Tunable Information Bottleneck and Rényi Measures

IEEE Transactions on Information Forensics and Security (2024)

Abstract
Designing machine learning algorithms that are accurate yet fair, not discriminating based on any sensitive attribute, is of paramount importance for society to accept AI for critical applications. In this article, we propose a novel fair representation learning method termed the Rényi Fair Information Bottleneck Method (RFIB), which incorporates constraints for utility, fairness, and compactness (compression) of representation, and apply it to image and tabular data classification. A key attribute of our approach is that we consider, in contrast to most prior work, both demographic parity and equalized odds as fairness constraints, allowing for a more nuanced satisfaction of both criteria. Leveraging a variational approach, we show that our objectives yield a loss function involving classical Information Bottleneck (IB) measures and establish an upper bound in terms of two Rényi measures of order $\boldsymbol{\alpha}$ on the mutual information IB term measuring compactness between the input and its encoded embedding. We study the influence of the $\boldsymbol{\alpha}$ parameter as well as two other tunable IB parameters on achieving utility/fairness trade-off goals, and show that the $\boldsymbol{\alpha}$ parameter gives an additional degree of freedom that can be used to control the compactness of the representation. Experimenting on three different image datasets (EyePACS, CelebA, and FairFace) and two tabular datasets (Adult and COMPAS), using both binary and categorical sensitive attributes, we show that on various utility, fairness, and compound utility/fairness metrics RFIB outperforms current state-of-the-art approaches.
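The role of the order-$\alpha$ Rényi measures mentioned above can be illustrated with the standard Rényi divergence for discrete distributions. The sketch below is a generic illustration of how $\alpha$ acts as a tunable knob on the divergence (recovering the Kullback-Leibler divergence as $\alpha \to 1$); it is not the paper's RFIB loss or its variational implementation.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence of order alpha between discrete distributions p and q.

    D_alpha(P || Q) = 1/(alpha - 1) * log( sum_i p_i^alpha * q_i^(1 - alpha) ).
    As alpha -> 1, this converges to the Kullback-Leibler divergence.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    if np.isclose(alpha, 1.0):
        # Limiting case: KL divergence
        return float(np.sum(p * np.log(p / q)))
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

# Example: the divergence grows monotonically with alpha,
# which is the "extra degree of freedom" a tunable order provides.
p = np.array([0.7, 0.3])
q = np.array([0.5, 0.5])
for a in (0.5, 1.0, 2.0, 10.0):
    print(f"alpha = {a:>4}: D_alpha = {renyi_divergence(p, q, a):.4f}")
```

In an IB-style objective, replacing a fixed KL regularizer with a Rényi divergence of adjustable order lets one trade off how aggressively the embedding is compressed, which is the intuition behind tuning $\alpha$ here.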
Keywords
Deep learning, fair representation learning, equalized odds, demographic parity, classification, information bottleneck (IB), Rényi divergence, Rényi cross-entropy