RRFL: A rational and reliable federated learning incentive framework for mobile crowdsensing

Qingyi He, Youliang Tian, Shuai Wang, Jinbo Xiong

Journal of King Saud University - Computer and Information Sciences (2024)

Abstract
Data privacy for mobile users (MUs) in mobile crowdsensing (MCS) has attracted significant attention. Federated Learning (FL) breaks down data silos, enabling MUs to train locally without revealing their actual data. However, FL faces challenges from the selfish and malicious behavior of MUs, which can harm the global model's performance. To address these challenges, we propose a rational, reliable FL framework (RRFL) for MCS. First, using Euclidean distance and the frequency of past malicious behavior, we compute risk scores for MUs and eliminate outlier updates. Second, we design a long-term, fair incentive mechanism that evaluates each MU's comprehensive reputation based on risk scores from its historical sensing tasks. Rewards are allocated exclusively to consistently outstanding MUs, encouraging honest cooperation in MCS. Finally, we construct an extensive-form game with imperfect information and derive its sequential equilibrium to validate the scheme's rationality. Experimental verification on the MNIST dataset demonstrates the effectiveness and reliability of RRFL, with results indicating strong accuracy and overall cost performance. MCS participants achieve their maximum utility, with over a 50% reduction in detection costs compared to short-term FL incentive mechanisms in MCS.
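The abstract does not give the paper's exact formulas, but the risk-scoring step it describes (combining the Euclidean distance of a user's update from the crowd with that user's historical misbehavior frequency, then dropping high-risk outliers) can be sketched as follows. The weights, threshold, and normalization here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def risk_scores(updates, misbehavior_freq, dist_weight=0.5, freq_weight=0.5):
    """Hedged sketch of an RRFL-style risk score: blend each user's
    Euclidean distance from the mean model update with that user's
    historical malicious-behavior frequency (both scaled to [0, 1])."""
    mean_update = np.mean(updates, axis=0)
    dists = np.linalg.norm(updates - mean_update, axis=1)
    norm_dists = dists / (dists.max() + 1e-12)          # scale distances to [0, 1]
    freqs = np.asarray(misbehavior_freq, dtype=float)   # fraction of past rounds flagged
    return dist_weight * norm_dists + freq_weight * freqs

def filter_outliers(updates, scores, threshold=0.5):
    """Drop updates whose risk score exceeds the threshold."""
    keep = scores <= threshold
    return updates[keep], keep

# Toy example: three honest users near zero, one obvious outlier.
updates = np.array([[0.1, 0.0], [0.0, 0.1], [0.05, 0.05], [5.0, 5.0]])
misbehavior_freq = [0.0, 0.0, 0.1, 0.6]   # hypothetical past-misbehavior rates
scores = risk_scores(updates, misbehavior_freq)
kept, mask = filter_outliers(updates, scores)
# The outlier at [5.0, 5.0] gets the highest score and is removed.
```

The aggregator would then average only the kept updates, and the per-round risk scores would feed the long-term reputation used by the incentive mechanism.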
Keywords
Federated learning, Incentive mechanism, Malicious behavior, Reputation evaluation, Game theory