Data Poisoning Attack against Recommender System Using Incomplete and Perturbed Data

KDD 2021

Abstract
Recent studies reveal that recommender systems are vulnerable to data poisoning attacks due to their open nature. In a data poisoning attack, the attacker typically recruits a group of controlled users to inject well-crafted user-item interaction data into the recommendation model's training set, steering the model parameters as desired. Existing attack approaches therefore usually require full access to the training data in order to infer items' characteristics and craft the fake interactions for the controlled users. However, such approaches may not be feasible in practice because of the attacker's limited data collection capability and restricted access to the training data, which may additionally be perturbed by the service provider's privacy-preserving mechanisms. This design-reality gap can cause attacks to fail. In this paper, we fill the gap by proposing two novel adversarial attack approaches that handle incompleteness and perturbations in user-item interaction data. First, we propose a bi-level optimization framework that incorporates a probabilistic generative model to find the users and items whose interaction data is sufficient and has not been significantly perturbed, and leverages these users' and items' data to craft fake user-item interactions. Second, we reverse the learning process of recommendation models and develop a simple yet effective approach that can incorporate context-specific heuristic rules to handle data incompleteness and perturbations. Extensive experiments on two datasets against three representative recommendation models show that the proposed approaches achieve better attack performance than existing approaches.
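To make the bi-level setup the abstract describes concrete, below is a minimal runnable sketch, not the paper's actual algorithm: it poisons a toy matrix-factorization recommender with a handful of fake users, re-trains the model in the inner loop, and substitutes a popular-item co-rating heuristic for the exact outer-loop hypergradient. All names and parameters here (n_fake, target_item, the heuristic outer update) are illustrative assumptions, not from the paper.

```python
# Minimal sketch of a bi-level data poisoning setup (assumptions, not the
# paper's method): inner loop trains a matrix-factorization recommender on
# clean + fake data; outer loop adjusts the fake interactions to promote a
# target item, using a popularity heuristic in place of a true hypergradient.
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical clean data: a small implicit-feedback matrix ---
n_users, n_items, k = 50, 30, 8
R = rng.choice([0.0, 1.0], size=(n_users, n_items), p=[0.8, 0.2])

target_item = 3      # item whose exposure the attacker wants to boost
n_fake = 5           # number of attacker-controlled (fake) users
F = rng.uniform(0, 1, size=(n_fake, n_items))  # fake interactions to optimize

def train_mf(R_all, steps=200, lr=0.05, reg=0.01):
    """Inner problem: fit user/item latent factors to the (poisoned) data."""
    n_u, n_i = R_all.shape
    U = rng.normal(0, 0.1, (n_u, k))
    V = rng.normal(0, 0.1, (n_i, k))
    for _ in range(steps):
        E = U @ V.T - R_all                  # prediction error on all entries
        U -= lr * (E @ V + reg * U)
        V -= lr * (E.T @ U + reg * V)
    return U, V

# --- Outer problem: adjust fake ratings to push the target item upward ---
for outer_step in range(10):
    U, V = train_mf(np.vstack([R, F]))       # retrain on clean + fake data
    scores = U[:n_users] @ V.T                # predicted scores for real users
    hit = (scores.argmax(axis=1) == target_item).mean()
    # Heuristic stand-in for the exact outer gradient: rate the target item
    # highly and co-rate currently popular items to link them in latent space.
    popular = R.sum(axis=0).argsort()[-5:]
    F[:, target_item] = 1.0
    F[:, popular] = np.clip(F[:, popular] + 0.1, 0, 1)
    print(f"step {outer_step}: hit-ratio@1 for target item = {hit:.3f}")
```

In the full bi-level formulation, the heuristic outer update above would be replaced by differentiating the attacker's objective through the inner training procedure, and the paper's generative model would additionally restrict which users and items the fake interactions imitate.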
Keywords
Adversarial learning, Recommender system, Data poisoning