Fairmod: making predictions fair in multiple protected attributes

Knowledge and Information Systems (2024)

Abstract
Predictive models such as decision trees and neural networks may produce unfair predictions. Several algorithms have been proposed to mitigate unfairness when protected attributes are considered individually. Mitigation becomes considerably harder, however, when an application involves multiple protected attributes whose fairness requirements must be enforced simultaneously, and existing methods do not address this problem. This paper aims to be the first to solve it and proposes a method for post-processing unfair predictions into fair ones. The method considers multiple simultaneous protected attributes together with context attributes, such as position, profession and education, that describe contextual details of the application. Our method consists of two steps. The first step solves a nonlinear optimization problem to determine the adjustment plan that meets the requirements of all protected attributes simultaneously while best preserving the original predictions; the optimization guarantees that the solution handles the interaction among the protected attributes with respect to fairness in the best manner. The second step learns adjustment thresholds from the results of the optimization. The proposed method is evaluated on real-world datasets, and the evaluation shows that it makes effective adjustments to meet the fairness requirements.
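The abstract does not give the optimization's exact formulation, but the two-step structure can be illustrated with a minimal sketch: assume two binary protected attributes, take statistical parity as the fairness criterion, relax each group's positive rate with a sigmoid so the problem becomes a smooth nonlinear program, and solve for per-group score offsets with `scipy.optimize.minimize`. Every concrete choice here (the parity criterion, the sigmoid relaxation, the per-group offsets, the synthetic data) is an illustrative assumption, not the paper's actual method.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-ins for a model's scores and two binary protected
# attributes (names and distributions are illustrative only).
rng = np.random.default_rng(0)
n = 2000
sex = rng.integers(0, 2, size=n)
race = rng.integers(0, 2, size=n)
# Bias the scores so baseline positive rates differ across groups.
scores = np.clip(rng.normal(0.45 + 0.10 * sex + 0.08 * race, 0.2), 0, 1)

group = 2 * sex + race                 # index of the (sex, race) cell: 0..3
sizes = np.bincount(group, minlength=4)

def soft_rate(offsets, mask, t=0.5, temp=25.0):
    """Smooth surrogate for the positive rate of one protected group."""
    s = scores[mask] + offsets[group[mask]]
    return np.mean(1.0 / (1.0 + np.exp(-temp * (s - t))))

def objective(offsets):
    # Step 1's "preserve the original predictions" term: penalize large
    # shifts, weighted by how many individuals each shift touches.
    return np.sum(sizes * offsets ** 2) / n

constraints = [
    # Statistical parity across sex groups (under the smooth surrogate)...
    {"type": "eq", "fun": lambda d: soft_rate(d, sex == 0) - soft_rate(d, sex == 1)},
    # ...and across race groups, enforced simultaneously.
    {"type": "eq", "fun": lambda d: soft_rate(d, race == 0) - soft_rate(d, race == 1)},
]

res = minimize(objective, x0=np.zeros(4), method="SLSQP", constraints=constraints)

# Step 2 analogue: shifting a group's scores by +d is equivalent to
# lowering its decision threshold by d, so the optimized offsets
# translate directly into per-group thresholds.
thresholds = 0.5 - res.x
for g in range(4):
    print(f"group (sex={g // 2}, race={g % 2}): threshold {thresholds[g]:.3f}")
```

The final lines mirror the paper's second step in spirit: once the optimization has fixed the adjustments, they are converted into per-group decision thresholds that can be applied to new predictions without re-solving the program.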
Keywords
Fairness computing,Discrimination-aware,Post-processing,Multiple protected attributes