Adversarial Training Regularization For Negative Sampling Based Network Embedding

Information Sciences (2021)

Abstract
The aim of network embedding is to learn compact node representations, which have proven effective in various downstream learning tasks such as link prediction and node classification. Most methods focus on preserving different network structures and properties, ignoring the fact that networks are usually noisy and incomplete; such methods therefore potentially lack robustness and suffer from overfitting. Recently, methods based on generative adversarial networks have been exploited to impose a prior distribution on node embeddings to encourage global smoothness, but their model architectures are complicated and they suffer from non-convergence problems. Here, we propose adversarial training (AdvT), a more succinct and effective local regularization method for negative-sampling-based network embedding, to improve model robustness and generalization ability. Specifically, we first define the adversarial perturbations in the embedding space instead of in the discrete graph domain, circumventing the challenge of generating discrete adversarial examples. Then, to enable more effective regularization, we design adaptive l2-norm constraints on the adversarial perturbations that depend on the connectivity pattern of each node pair. We integrate AdvT into several well-known models, including DEEPWALK, LINE and node2vec, and conduct extensive experiments on benchmark datasets to verify its effectiveness. (C) 2021 Elsevier Inc. All rights reserved.
Keywords
Network Embedding, Adversarial Training, Robustness
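The core idea in the abstract, perturbing node embeddings along the loss gradient under an l2-norm budget rather than editing the discrete graph, can be illustrated with a minimal sketch. This is an assumed FGM-style construction on a skip-gram-with-negative-sampling objective, not the authors' implementation; the function names and the fixed `eps` budget are illustrative (the paper makes the budget adaptive per node pair).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss_grad(u, v, label):
    """Skip-gram negative-sampling loss for one node pair and its
    gradient w.r.t. the center embedding u.
    label = 1 for an observed (positive) pair, 0 for a negative sample."""
    score = sigmoid(u @ v)
    loss = -np.log(score) if label == 1 else -np.log(1.0 - score)
    grad_u = (score - label) * v
    return loss, grad_u

def adversarial_perturbation(u, v, label, eps):
    """Worst-case direction under an l2 budget: step along the loss
    gradient, rescaled to norm eps (eps could be made adaptive
    per node pair, as the paper proposes)."""
    _, g = sgns_loss_grad(u, v, label)
    return eps * g / (np.linalg.norm(g) + 1e-12)

# Toy example: one positive pair in an 8-dimensional embedding space.
rng = np.random.default_rng(0)
u = rng.normal(size=8)
v = rng.normal(size=8)

clean_loss, _ = sgns_loss_grad(u, v, 1)
delta = adversarial_perturbation(u, v, 1, eps=0.5)
adv_loss, _ = sgns_loss_grad(u + delta, v, 1)

# Adversarial training would minimize the loss on (u + delta, v);
# the perturbation locally increases the loss relative to the clean pair.
print(clean_loss, adv_loss)
```

Because the perturbation lives in the continuous embedding space, it can be plugged into any negative-sampling objective (DEEPWALK, LINE, node2vec) as an extra regularization term without touching the graph itself.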