Robust federated learning with voting and scaling

Xiang-Yu Liang, Heng-Ru Zhang, Wei Tang, Fan Min

Future Generation Computer Systems: The International Journal of eScience (2024)

Abstract
Federated learning is vulnerable to poisoning attacks because the server cannot verify the authenticity of local data. Existing robust federated learning methods maintain a global model by discarding potentially risky local updates. However, they generally assume that the server knows the number of potentially abnormal clients. In this paper, we propose a robust federated learning method based on voting and scaling that relaxes this assumption. Malicious updates usually manifest as abnormal direction and magnitude. On the one hand, the server computes the relative angle between the target update and each other local update; an angle greater than 90 degrees counts as a negative vote, otherwise a positive vote. If the negative votes exceed a predefined threshold, the target update is considered abnormal. On the other hand, the server computes the median magnitude of the updates remaining after those with abnormal directions are filtered out. Local updates with magnitudes above/below the median are scaled down/up. Experiments are carried out on five datasets in comparison with five state-of-the-art algorithms. Results on two metrics, the poisoning rate and the main task rate, show that our method effectively improves the robustness of federated learning. Source code is available at https://github.com/liangxyswpu/lxyCode.
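The abstract describes the aggregation rule only at a high level. The following is a minimal sketch of how such a voting-and-scaling step could look; the vote_threshold parameter, the exact scaling rule (rescaling each surviving update to the median norm), and the final averaging step are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def vote_and_scale_aggregate(updates, vote_threshold=0.5):
    """Sketch of a voting-and-scaling aggregation in the spirit of the
    abstract. `updates` is a list of 1-D numpy arrays (flattened local
    updates); `vote_threshold` is the assumed fraction of negative votes
    above which an update is treated as abnormal."""
    n = len(updates)
    kept = []
    for i, target in enumerate(updates):
        negative_votes = 0
        for j, other in enumerate(updates):
            if i == j:
                continue
            # Relative angle > 90 degrees  <=>  negative dot product.
            if np.dot(target, other) < 0:
                negative_votes += 1
        # Keep the update only if the negative-vote share stays below
        # the threshold; otherwise its direction is deemed abnormal.
        if negative_votes / (n - 1) <= vote_threshold:
            kept.append(target)
    # Scale surviving updates toward the median magnitude: norms above
    # the median shrink, norms below it grow (here, set exactly to it).
    norms = np.array([np.linalg.norm(u) for u in kept])
    median_norm = np.median(norms)
    scaled = [u * (median_norm / (m + 1e-12)) for u, m in zip(kept, norms)]
    # Average the scaled updates into the global update.
    return np.mean(scaled, axis=0)
```

In this sketch the dot-product sign test is equivalent to the 90-degree angle check in the abstract, since cos(theta) < 0 exactly when the angle between two vectors exceeds 90 degrees.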
Keywords
Abnormal direction, Abnormal magnitude, Federated learning, Scaling, Voting