A Cluster-based Solution to Achieve Fairness in Federated Learning

ISPA/BDCloud/SocialCom/SustainCom (2020)

Abstract
Privacy has recently become a major social concern, as personal data is collected and analyzed from a wide range of IoT devices. Federated Learning addresses this need: it prevents the disclosure of private information and violations of data-protection rules by training a global model without exposing raw personal data. One of the most popular algorithms in this paradigm is Federated Averaging, in which the global model is forwarded to a few selected devices and the resulting gradients are averaged at the server. However, the aggregated global model performs unevenly across clients because of unbalanced and non-independent and identically distributed (non-IID) data. In this paper, we propose a novel framework, cluster-based Federated Averaging, which achieves a fair global model by organizing the devices into groups and selecting clients from each group equally. In this way, the accuracy of the minority group can be improved significantly at little expense to the majority group. To follow federated learning's instinct of privacy protection, we use the training weights as the features for clustering the users, ensuring that the clients' training data never leaves their devices. We applied our framework to three popular machine-learning datasets: MNIST, Fashion-MNIST, and CIFAR-10. The experiments demonstrate that our framework can train a fair shared model effectively and efficiently.
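The client-selection scheme described in the abstract (cluster devices into groups, then sample clients from each group equally per round) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each client's locally trained weights are available to the server as a flat feature vector, uses a plain k-means clustering (the paper does not specify the clustering algorithm), and the function names `kmeans_assign` and `select_clients_per_cluster` are hypothetical.

```python
import random

def _dist2(a, b):
    """Squared Euclidean distance between two flat weight vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans_assign(vectors, k, iters=20):
    """Assign each client's flattened weight vector to one of k clusters.

    A simple k-means sketch: centers are initialized by striding through
    the client list (assumes len(vectors) >= k), then refined by the
    usual assign/update loop.
    """
    step = max(1, len(vectors) // k)
    centers = [list(v) for v in vectors[::step]][:k]
    assignments = [0] * len(vectors)
    for _ in range(iters):
        assignments = [
            min(range(k), key=lambda j: _dist2(v, centers[j]))
            for v in vectors
        ]
        for j in range(k):
            members = [vectors[i] for i, g in enumerate(assignments) if g == j]
            if members:  # keep the old center if a cluster goes empty
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return assignments

def select_clients_per_cluster(assignments, per_cluster, seed=0):
    """Sample the same number of clients from every cluster for a round,
    instead of sampling uniformly over all clients as in vanilla FedAvg."""
    rng = random.Random(seed)
    groups = {}
    for client_idx, g in enumerate(assignments):
        groups.setdefault(g, []).append(client_idx)
    selected = []
    for members in groups.values():
        selected.extend(rng.sample(members, min(per_cluster, len(members))))
    return selected

# Usage: four clients whose weight vectors form two obvious groups.
weights = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
clusters = kmeans_assign(weights, k=2)
round_clients = select_clients_per_cluster(clusters, per_cluster=1)
```

Because selection is stratified by cluster, a minority group of clients is guaranteed representation in every aggregation round, which is the mechanism the abstract credits for the fairness improvement.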
Keywords
Federated Learning, Federated Averaging, Fairness