Defending Federated Learning from Backdoor Attacks: Anomaly-Aware FedAVG with Layer-Based Aggregation

2023 IEEE 34th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)

Abstract
Federated Learning (FL) is susceptible to backdoor adversarial attacks during training, which pose a significant threat to model performance. Existing mitigation solutions mainly rely on neural network (NN) model statistics and discard an entire client model once it is flagged as attacked. This approach is wasteful, as discarding whole client models leads to suboptimal performance. It is therefore crucial to develop lightweight backdoor-attack mitigation solutions that use clients' model statistics efficiently. To address this issue, we propose Layer-Based Anomaly-Aware FedAVG (LBAA-FedAVG), a modified version of the standard FedAVG aggregation mechanism. Our framework employs a clustering-based technique and treats each NN layer individually: depending on the type of adversarial attack, it selectively excludes one or more layers of a client's NN during aggregation. We focus on the model inversion attack and vary the percentage of compromised clients from 10% to 50%. Our experimental findings demonstrate that LBAA-FedAVG outperforms Federated Averaging (FedAVG) in reducing the negative effects of backdoor adversarial attacks. Our complexity analysis shows that the only additional resource cost of LBAA-FedAVG is training time, which is 19% higher than that of FedAVG. Finally, we conduct experiments on short-term load forecasting with grid-level datasets to show the effectiveness of LBAA-FedAVG as a lightweight backdoor-attack mitigation method in FL settings, offering a trade-off between time efficiency and enhanced defense.
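
The page does not include an implementation, but the mechanism summarized above lends itself to a short illustration. Below is a minimal sketch, not the authors' code: it assumes a 2-means clustering step over each layer's flattened client updates, treats the minority cluster as anomalous, and averages only the remaining clients' parameters for that layer. The function `lbaa_fedavg_round` and all variable names are hypothetical.

```python
# Minimal sketch of layer-wise anomaly-aware aggregation (illustrative only;
# not the authors' implementation). Assumption: the minority cluster of a
# 2-means split over each layer's flattened updates is anomalous.
import numpy as np
from sklearn.cluster import KMeans

def lbaa_fedavg_round(client_weights):
    """client_weights: list of client models, each a list of per-layer arrays."""
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        # Stack every client's parameters for this layer as flat vectors.
        updates = np.stack([w[layer].ravel() for w in client_weights])
        # Cluster the updates into two groups.
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(updates)
        # Keep the majority cluster; the minority is treated as anomalous.
        majority = np.bincount(labels).argmax()
        benign = updates[labels == majority]
        # Average only the benign updates, restoring the layer's shape.
        aggregated.append(
            benign.mean(axis=0).reshape(client_weights[0][layer].shape)
        )
    return aggregated

if __name__ == "__main__":
    # Toy usage: 5 clients, a 2-layer model, one outlier ("poisoned") client.
    rng = np.random.default_rng(0)
    clients = [[rng.normal(0, 0.1, (4, 3)), rng.normal(0, 0.1, 3)]
               for _ in range(5)]
    clients[0] = [w + 5.0 for w in clients[0]]  # simulate a poisoned update
    print([w.shape for w in lbaa_fedavg_round(clients)])
```

Because exclusion happens per layer, a client whose update looks anomalous in one layer still contributes its remaining layers to the global model; this is the key difference from defenses that discard the entire client model.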
Keywords
Neural Network, Energy Forecasting, Federated Learning, Backdoor Attack, FedAVG