Towards Fairness in Provably Communication-Efficient Federated Recommender Systems
CoRR (2024)
Abstract
To reduce the communication overhead caused by parallel training of multiple
clients, various federated learning (FL) techniques use random client sampling.
Nonetheless, ensuring the efficacy of random sampling and determining the
optimal number of clients to sample in federated recommender systems (FRSs)
remains challenging due to the isolated nature of each user as a separate
client. This challenge is exacerbated in models where public and private
features can be separated, and FL allows communication of only public features
(item gradients). In this study, we establish sample complexity bounds that
dictate the ideal number of clients required for improved communication
efficiency and retained accuracy in such models. In line with our theoretical
findings, we empirically demonstrate that RS-FairFRS reduces communication cost
(≈47%). However, random client sampling raises a substantial equity concern for
FRSs. Unlike centralized machine learning, clients in an FRS cannot share raw
data, including sensitive attributes. To address this, we introduce RS-FairFRS,
the first fairness-under-unawareness FRS built upon a random-sampling-based FRS.
While random sampling improves communication efficiency, we propose a novel
two-phase dual-fair update technique that achieves fairness without revealing
the protected attributes of the active clients participating in training. Our
results on real-world datasets with different sensitive features show a
significant reduction in demographic bias (≈40%), offering a promising path to
fairness and communication efficiency in FRSs without compromising overall
accuracy.
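The public/private split described above can be sketched with a toy federated matrix-factorization round: each client keeps its user embedding private, a random subset of clients is sampled per round, and only item gradients (the public features) travel to the server. This is a minimal illustration under assumed names, sizes, and learning rate, not the paper's actual RS-FairFRS protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 20, 10, 4

# Private features: each client u holds its own user embedding U[u].
# Public features: the server holds the item-factor matrix V.
U = rng.normal(scale=0.1, size=(n_users, d))
V = rng.normal(scale=0.1, size=(n_items, d))
R = (rng.random((n_users, n_items)) * 5).round()  # toy rating matrix (fully observed)

lr, rounds, m = 0.05, 50, 5  # sample m of the n_users clients per round (assumed values)
loss0 = np.mean((R - U @ V.T) ** 2)

for _ in range(rounds):
    sampled = rng.choice(n_users, size=m, replace=False)  # random client sampling
    grad_V = np.zeros_like(V)
    for u in sampled:
        err = R[u] - U[u] @ V.T             # local prediction error on client u
        U[u] += lr * (err @ V)              # private update: never leaves the client
        grad_V += lr * np.outer(err, U[u])  # public item gradient: shared with server
    V += grad_V / m                         # server aggregates item gradients only

loss1 = np.mean((R - U @ V.T) ** 2)
```

The communication cost per round scales with the sampled-client count `m` times the size of the item-gradient matrix, which is the quantity the paper's sample complexity bounds govern.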