
Quintuple-based Representation Learning for Bipartite Heterogeneous Networks

ACM Transactions on Intelligent Systems and Technology (2024)

Abstract
Recent years have seen rapid progress in network representation learning, which removes the need for burdensome feature engineering and facilitates downstream network-based tasks. In reality, networks are often heterogeneous: they may contain multiple types of nodes and interactions. Heterogeneous networks raise new challenges for representation learning, since awareness of node and edge types is required. In this paper, we study a basic building block of general heterogeneous networks: heterogeneous networks with two types of nodes. Many problems can be solved by decomposing general heterogeneous networks into multiple bipartite ones. Recently, to overcome the demerits of non-metric measures used in the embedding space, metric learning-based approaches have been leveraged for heterogeneous network representation learning. These approaches first generate triplets of samples, each consisting of an anchor node, a positive counterpart, and a negative one, and then try to pull positive samples closer and push negative ones away. However, when dealing with heterogeneous networks, even the simplest two-typed ones, a triplet cannot simultaneously involve positive and negative samples from both parts of the network. To address this incompatibility of triplet-based metric learning, we propose a novel quintuple-based method for learning node representations in bipartite heterogeneous networks. Specifically, we generate quintuples that contain positive and negative samples from both parts of the network, and we formulate two learning objectives that accommodate them: a proximity-based loss that models the relations in quintuples via sigmoid probabilities, and an angular loss that more robustly maintains similarity structures. In addition, we parameterize feature learning with one-dimensional convolution operators over nodes' neighborhoods. Extensive experiments on two downstream tasks, against eight baseline methods, demonstrate the effectiveness of our approach.
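To make the quintuple idea concrete, the following is a minimal sketch (not the paper's actual implementation) of a proximity-style loss over a quintuple: an anchor node together with a positive and a negative sample from each of the two node-type parts of a bipartite network. The function names, the use of negative Euclidean distance as the similarity measure, and the exact loss form are all illustrative assumptions; the paper's own objective may differ in its details.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid, used to turn similarity gaps into probabilities."""
    return 1.0 / (1.0 + np.exp(-x))

def quintuple_proximity_loss(anchor, pos_u, neg_u, pos_v, neg_v):
    """Hypothetical proximity loss for one quintuple.

    anchor        -- embedding of the anchor node
    pos_u, neg_u  -- positive/negative samples from one part of the network
    pos_v, neg_v  -- positive/negative samples from the other part

    Each side contributes a term that is small when the anchor is closer
    to its positive than to its negative (similarity gap pushed through a
    sigmoid, then a negative log-likelihood).
    """
    # Similarity as negative Euclidean distance (an illustrative choice).
    sim = lambda a, b: -np.linalg.norm(a - b)
    loss_u = -np.log(sigmoid(sim(anchor, pos_u) - sim(anchor, neg_u)))
    loss_v = -np.log(sigmoid(sim(anchor, pos_v) - sim(anchor, neg_v)))
    return loss_u + loss_v
```

Unlike a triplet loss, which compares the anchor against samples from only one part, this loss simultaneously pulls positives closer and pushes negatives away on both sides of the bipartite network in a single term.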