Efficiently Satisfying Subgroup Fairness in Generalized Classification Settings

Matt King, Fahim Tajwar

Semantic Scholar (2019)

Abstract
Fairness in machine learning is increasingly topical as machine learning algorithms are leveraged to predict criminal recidivism, future ability to repay loans, and many other outcomes that, correct or not, can influence individuals' lives for decades afterwards. Fairness in classification problems has been defined in several different statistical frameworks, but often an algorithm is considered fair if no protected group (e.g., a certain race, gender, etc.) endures a significantly higher false positive rate than other groups. However, this criterion is flawed: an intersection of different groups (called a subgroup) can suffer discrimination even when none of the groups it is drawn from does. A simple toy example demonstrates this phenomenon.
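As one hedged illustration of such an example (not taken from the paper), the following minimal Python sketch builds a synthetic "checkerboard" population: the false positive rate is identical across every marginal protected group, yet two intersectional subgroups bear double the population-wide rate. All group names and counts are invented for illustration.

```python
from collections import defaultdict

# Hypothetical true negatives (label = 0): each entry is
# (race, gender, predicted_label). Counts are illustrative only.
negatives = (
    [("A", "M", 1)] * 4 + [("A", "M", 0)] * 6   # subgroup (A, M): FPR 0.4
  + [("A", "F", 0)] * 10                        # subgroup (A, F): FPR 0.0
  + [("B", "M", 0)] * 10                        # subgroup (B, M): FPR 0.0
  + [("B", "F", 1)] * 4 + [("B", "F", 0)] * 6   # subgroup (B, F): FPR 0.4
)

def fpr(rows):
    """False positive rate among true negatives: fraction predicted 1."""
    return sum(pred for *_, pred in rows) / len(rows)

# Bucket the negatives by marginal group and by intersectional subgroup.
by_group, by_subgroup = defaultdict(list), defaultdict(list)
for race, gender, pred in negatives:
    by_group[race].append((race, gender, pred))
    by_group[gender].append((race, gender, pred))
    by_subgroup[(race, gender)].append((race, gender, pred))

for key, rows in by_group.items():
    print(f"group {key}: FPR = {fpr(rows):.2f}")     # every marginal group: 0.20
for key, rows in by_subgroup.items():
    print(f"subgroup {key}: FPR = {fpr(rows):.2f}")  # (A, M) and (B, F): 0.40
```

Under this construction, auditing only the four marginal groups (race A, race B, gender M, gender F) reports a uniform 0.20 false positive rate and flags nothing, while the subgroups (A, M) and (B, F) each face a 0.40 rate: exactly the gap between group fairness and subgroup fairness that the abstract describes.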