Assessing Wearable Human Activity Recognition Systems Against Data Poisoning Attacks in Differentially-Private Federated Learning

SMARTCOMP(2023)

Abstract
Differentially-Private Federated Learning (DPFL) is an emerging privacy-preserving distributed machine learning paradigm that allows for the automatic recognition of human activities using wearable sensors without compromising users' sensitive data. However, this decentralized approach makes the system vulnerable to poisoning attacks, where malicious agents can inject contaminated data during local model training. This paper presents the results of our research on designing, developing, and evaluating a holistic model for data poisoning attacks in DPFL-based human activity recognition (HAR) systems. Specifically, we focus on label-flipping poisoning attacks, where the label of a sensor reading is maliciously changed during data collection. To investigate the impact of such attacks, we develop a simulator that explores key design issues, such as the correlation between the level of differential privacy, the level of poisoning, the number of communication rounds, and the number of agents in the system. Our findings shed light on the effectiveness of label contamination attacks in DPFL-based HAR systems and can inform the development of more robust and secure models.
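The label-flipping attack described above can be sketched concisely. The function below is an illustrative example, not the paper's implementation: with probability `poison_rate`, a malicious agent replaces a sensor reading's true activity label with a different, randomly chosen class before local training. The function name, signature, and data layout are assumptions made for illustration.

```python
import random

def flip_labels(samples, poison_rate, num_classes, seed=0):
    """Illustrative label-flipping attack (hypothetical helper, not from the paper).

    `samples` is a list of (features, label) pairs with integer labels
    in [0, num_classes). Each label is replaced by a different random
    class with probability `poison_rate`; features are left untouched.
    """
    rng = random.Random(seed)
    poisoned = []
    for features, label in samples:
        if rng.random() < poison_rate:
            # Flip to any class other than the true one.
            label = rng.choice([c for c in range(num_classes) if c != label])
        poisoned.append((features, label))
    return poisoned

# Example: six sensor readings from a hypothetical 3-class HAR task,
# with half of the labels flipped on average.
clean = [([0.1 * i], i % 3) for i in range(6)]
dirty = flip_labels(clean, poison_rate=0.5, num_classes=3)
```

In a DPFL simulation such as the one the paper describes, a sweep over `poison_rate`, the number of agents, and the differential-privacy noise level would then expose how label contamination degrades global model accuracy.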
Keywords
Human Activity Recognition, Federated Learning, Distributed Machine Learning, Data Poisoning Attack, Differential Privacy, Wearable Sensing