Imbalanced Data Sparsity as a Source of Unfair Bias in Collaborative Filtering

Aditya Joshi, Chin Lin Wong, Diego Marinho de Oliveira, Farhad Zafari, Fernando Mourao, Sabir Ribas, Saumya Pandey

Proceedings of the 16th ACM Conference on Recommender Systems (RecSys 2022)

Abstract
Collaborative Filtering (CF) is a class of methods widely used to support high-quality Recommender Systems (RSs) across several industries [6]. Studies have uncovered distinct advantages and limitations of CF in many real-world applications [5, 9]. Besides the inability to address the cold-start problem, sensitivity to data sparsity is among the main limitations recurrently associated with this class of RSs. Past work has extensively demonstrated that data sparsity critically impacts CF accuracy [2-4]. The proposed talk revisits the relation between data sparsity and CF from a new perspective, showing that sparsity also impacts the fairness of recommendations. In particular, data sparsity might lead to unfair bias in domains where the volume of user activity strongly correlates with personal characteristics that are protected by law (i.e., protected attributes). This concern is critical for RSs deployed in domains such as recruitment, where RSs have been reported to automate or facilitate discriminatory behaviour [7]. Our work at SEEK deals with recommender algorithms that recommend jobs to candidates via SEEK's multiple channels. While this talk focuses on our perspective of the problem in the job-recommendation domain, the discussion is relevant to many other domains where recommenders potentially have a social or economic impact on the lives of individuals and groups.
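The abstract describes the mechanism only in prose. As a minimal synthetic sketch of that mechanism (not code from the paper), the following simulation builds two user groups with identically distributed tastes but very different activity volumes, then runs a simple nearest-neighbour CF step. All names, group sizes, observation rates, and the cluster-recovery metric are illustrative assumptions chosen to make the effect visible:

```python
import random

random.seed(0)

# Two "taste clusters" over 50 items; taste is independent of group membership.
CLUSTERS = [set(range(0, 25)), set(range(25, 50))]

def make_user(cluster_id, obs_rate):
    """A user likes 12 items from their own taste cluster plus 3 noise items;
    only a fraction obs_rate of those likes is actually observed as activity."""
    own = random.sample(sorted(CLUSTERS[cluster_id]), 12)
    noise = random.sample(sorted(CLUSTERS[1 - cluster_id]), 3)
    likes = set(own + noise)
    observed = {i for i in likes if random.random() < obs_rate}
    if not observed:                      # guarantee at least one interaction
        observed = {random.choice(sorted(likes))}
    return {"cluster": cluster_id, "likes": likes, "obs": observed}

# Dense group: 90% of likes observed; sparse group: only 10%.
# In the paper's setting, group membership would correlate with a
# protected attribute -- here it is just a label.
dense = [make_user(i % 2, 0.9) for i in range(40)]
sparse = [make_user(i % 2, 0.1) for i in range(40)]
users = dense + sparse

def jaccard(a, b):
    return len(a & b) / len(a | b)

def top_neighbour_cluster(u, pool):
    """Nearest neighbour by Jaccard similarity over *observed* interactions."""
    best = max((v for v in pool if v is not u),
               key=lambda v: jaccard(u["obs"], v["obs"]))
    return best["cluster"]

def group_accuracy(group):
    """How often the nearest neighbour shares the user's true taste cluster."""
    hits = sum(top_neighbour_cluster(u, users) == u["cluster"] for u in group)
    return hits / len(group)

dense_acc = group_accuracy(dense)
sparse_acc = group_accuracy(sparse)
print(f"dense-group neighbour accuracy:  {dense_acc:.2f}")
print(f"sparse-group neighbour accuracy: {sparse_acc:.2f}")
```

Because sparse users expose only one or two interactions, their similarity estimates are dominated by noise, and the neighbourhood step recovers their true tastes less often than it does for dense users, even though both groups' preferences were drawn from the same distribution. If activity volume correlates with a protected attribute, this accuracy gap becomes a group-level fairness gap.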
Key words
Recommender Systems, Fairness, Data Sparsity, Algorithmic Bias, Responsible AI