Attacking Click-through Rate Predictors via Generating Realistic Fake Samples

ACM Transactions on Knowledge Discovery from Data (2024)

Abstract
How to construct imperceptible (realistic) fake samples is critical in adversarial attacks. Because recommender system samples contain diverse features (both discrete and continuous), traditional gradient-based adversarial attack methods may fail to construct realistic fake samples. Meanwhile, most recommendation models adopt click-through rate (CTR) predictors, which usually rely on black-box deep models that take discrete features as input. Thus, efficiently constructing realistic fake samples for black-box recommender systems remains challenging. In this article, we propose a hierarchical adversarial attack method against black-box CTR models via generating realistic fake samples, named CTRAttack. To better train the generation network, the weights of its embedding layer are shared with those of the substitute model, and both a similarity loss and a classification loss are used to update the generation network. To ensure that the discrete features of the generated fake samples are all real, we first use the similarity loss to keep the distribution of the generated perturbed samples sufficiently close to the distribution of the real features, and then apply a nearest neighbor search to retrieve the most appropriate features for non-existent discrete features from the candidate instance set. Extensive experiments demonstrate that CTRAttack can not only effectively attack black-box recommender systems but also improve the robustness of these models while maintaining prediction accuracy.
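The abstract describes two key mechanisms: training a generation network whose embedding layer is shared with a substitute CTR model, using a similarity loss plus a classification loss, and then snapping generated embeddings back to feature values that actually exist via nearest neighbor retrieval. The fragment below is a minimal, illustrative sketch of those two steps in PyTorch; the module names, loss weighting, and tensor shapes are assumptions made for illustration and do not reflect the authors' implementation.

```python
# Illustrative sketch only; not the paper's code. All names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Generates perturbed embeddings; its embedding layer is shared with the
    substitute CTR model, as described in the abstract."""
    def __init__(self, shared_embedding: nn.Embedding, hidden: int = 64):
        super().__init__()
        self.embedding = shared_embedding            # weights shared with the substitute model
        dim = shared_embedding.embedding_dim
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, feature_ids: torch.Tensor) -> torch.Tensor:
        real_emb = self.embedding(feature_ids)       # (batch, n_fields, dim)
        return real_emb + self.net(real_emb)         # perturbed embeddings

def generation_loss(perturbed, real_emb, substitute_logits, target_label):
    # Similarity loss: keep perturbed embeddings close to the real feature distribution
    # (MSE is an assumption; the paper's exact similarity measure may differ).
    sim_loss = F.mse_loss(perturbed, real_emb)
    # Classification loss: push the substitute CTR model toward the attacker's target label.
    cls_loss = F.binary_cross_entropy_with_logits(substitute_logits, target_label.float())
    return sim_loss + cls_loss                       # equal weighting is an assumption

def snap_to_real_features(perturbed, candidate_embeddings):
    # Nearest neighbor retrieval: replace each perturbed embedding with the closest
    # embedding of a feature value that actually exists in the candidate set, so the
    # discrete features of the fake sample are all real.
    # perturbed: (batch, n_fields, dim); candidate_embeddings: (n_values, dim)
    flat = perturbed.reshape(-1, perturbed.size(-1))
    dists = torch.cdist(flat, candidate_embeddings)  # (batch * n_fields, n_values)
    nearest_ids = dists.argmin(dim=-1)               # indices of real feature values
    return nearest_ids.reshape(perturbed.shape[:-1])
```

Sharing the embedding weights means the generator perturbs samples in the same embedding space that the nearest neighbor search operates in, which is what makes snapping back to real discrete feature values well defined in this sketch.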
Keywords
Recommender system, deep learning, robustness, adversarial attack