Poster: Raising the Temporal Misalignment in Federated Learning

ICDCS (2023)

Abstract
The rapid evolution of public knowledge is a hallmark of the present era, rendering previously collected data susceptible to obsolescence. Continuously generated new knowledge can further degrade the performance of a model trained on previous data, a phenomenon known as temporal misalignment. A vanilla mitigation approach is to periodically update the model in a centralized learning scheme. However, in a decentralized framework such as Federated Learning (FL), this patch requires clients to upload their data, which contradicts FL's intention of protecting client privacy. Furthermore, under the stationary defenses deployed in FL, new knowledge can be misjudged and rejected as a malicious attack, which hinders further updates to the model. Yet dynamically adapting the defenses requires meticulous fine-tuning and harms scalability. In this poster, we therefore raise this practical concern and discuss it in the context of FL. We build a prototype of a GPT2-based FL framework and conduct experiments to demonstrate our perspective: performance on new knowledge drops by 33.47% compared with the previous data, which shows that FL with defense strategies can misjudge new knowledge.
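The misjudgment described above can be illustrated with a minimal sketch. The abstract does not specify the defense used, so the following assumes a common stationary distance-based filter (cosine similarity to the mean update with a fixed threshold, in the spirit of similarity-filtering defenses); the client counts, dimensions, and threshold are all hypothetical. Clients holding newly emerged knowledge produce updates that point in a systematically different direction from the majority, so a threshold tuned on old data rejects them as if they were attackers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation: clients 0-7 train on "old" data, so their model
# updates cluster around one direction; clients 8-9 hold newly emerged
# knowledge, so their updates point in a systematically different direction.
old_updates = rng.normal(loc=1.0, scale=0.1, size=(8, 16))
new_updates = rng.normal(loc=-1.0, scale=0.1, size=(2, 16))
updates = np.vstack([old_updates, new_updates])

# Stationary defense: reject any update whose cosine similarity to the
# coordinate-wise mean falls below a threshold fixed at deployment time.
mean_update = updates.mean(axis=0)
cos = updates @ mean_update / (
    np.linalg.norm(updates, axis=1) * np.linalg.norm(mean_update)
)
THRESHOLD = 0.5  # tuned on old data and never adapted
accepted = cos >= THRESHOLD

# The two new-knowledge clients are filtered out exactly like attackers,
# so the aggregated model never incorporates the new knowledge.
print(accepted)
```

Under these assumptions the defense accepts the eight old-data clients and rejects both new-knowledge clients, which is the failure mode the poster highlights: without dynamically re-tuning the threshold, benign temporal drift is indistinguishable from a poisoning attempt.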
Keywords
Federated Learning, Temporal Misalignment, Secure Aggregation