Distance Preserving Model Order Reduction of Graph-Laplacians and Cluster Analysis

Journal of Scientific Computing (2021)

Abstract
Graph-Laplacians and their spectral embeddings play an important role in multiple areas of machine learning. This paper focuses on graph-Laplacian dimension reduction with the spectral clustering of data as the primary application; however, the approach can also be applied in data mining, data manifold learning, etc. Spectral embedding provides a low-dimensional parametrization of the data manifold, which makes the subsequent task (e.g., clustering with k-means or any of its approximations) much easier. However, despite reducing the dimensionality of the data, the overall computational cost may still be prohibitive for large data sets due to two factors. First, computing the partial eigendecomposition of the graph-Laplacian typically requires a large Krylov subspace. Second, after the spectral embedding is complete, one still has to operate with the same number of data points, which may ruin the efficiency of the approach. For example, clustering of the embedded data is typically performed with various relaxations of k-means whose computational cost scales poorly with the size of the data set. These relaxations are also prone to getting stuck in local minima, so their robustness depends on the choice of initial guess. In this work, we switch the focus from the entire data set to a subset of graph vertices (the target subset). We develop two novel algorithms for a low-dimensional representation of the original graph that preserves important global distances between the nodes of the target subset. In particular, this ensures that the target subset clustering is consistent with the spectral clustering of the full data set, were the latter to be performed. That is achieved by a properly parametrized reduced-order model (ROM) of the graph-Laplacian that accurately approximates the diffusion transfer function of the original graph for inputs and outputs restricted to the target subset.
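The spectral clustering pipeline referred to above can be sketched in a few lines. This is a minimal illustration on a synthetic two-cluster graph (all data and the cluster sizes are illustrative assumptions, not from the paper): build the graph-Laplacian, compute its low-lying eigenvectors, and cluster in the embedded space — here, simply by the sign of the Fiedler vector instead of a full k-means run.

```python
import numpy as np

# Toy graph: two 4-node cliques joined by a single weak edge.
n = 8
A = np.zeros((n, n))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)
A[3, 4] = A[4, 3] = 0.1  # weak bridge between the two cliques

# Unnormalized graph-Laplacian L = D - A.
D = np.diag(A.sum(axis=1))
L = D - A

# Spectral embedding: eigenvectors of L for the smallest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]  # second-smallest eigenvector separates the clusters

# For two clusters, the sign pattern of the Fiedler vector already
# recovers the partition (a stand-in for k-means on the embedding).
labels = (fiedler > 0).astype(int)
```

For more than two clusters one would keep the first k eigenvectors and run k-means (or a relaxation thereof) on the embedded points, which is exactly the step whose cost the paper's target-subset approach aims to reduce.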
Working with a small target subset greatly reduces the required dimension of the Krylov subspace and allows conventional algorithms (such as approximations of k-means) to be used in the regimes where they are most robust and efficient. This was verified in numerical clustering experiments with both synthetic and real data. We also note that our ROM approach can be applied in a purely transfer-function-data-driven way, making it the only feasible option for extremely large graphs that are not directly accessible. There are several uses for our algorithms. First, they can be employed on their own for representative subset clustering in cases when handling the full graph is either infeasible or simply not required. Second, they may be used for quality control. Third, as they drastically reduce the problem size, they enable the application of more sophisticated algorithms for the task under consideration (such as more powerful approximations of k-means based on semi-definite programming (SDP) instead of the conventional Lloyd's algorithm). Finally, they can be used as building blocks of a multi-level divide-and-conquer type algorithm to handle the full graph. The latter will be reported in a separate article.
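The core idea of a ROM matching the Laplacian's transfer function on a target subset can be illustrated with a generic block-Krylov Galerkin projection. This is a hypothetical sketch, not the authors' specific parametrization: the graph, the target set, and the subspace depth are all illustrative assumptions. For a symmetric Laplacian, a one-sided projection onto span{B, LB, L²B} reproduces the leading Markov parameters Bᵀ Lᵏ B of the transfer function restricted to the target subset.

```python
import numpy as np

# Illustrative dense weighted graph (synthetic data, seeded for reproducibility).
rng = np.random.default_rng(0)
n = 20
W = rng.random((n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W  # graph-Laplacian

# Target subset: inputs/outputs restricted to a few nodes of interest.
target = [0, 5, 12]
B = np.eye(n)[:, target]

# Block-Krylov subspace span{B, LB, L^2 B} with an orthonormal basis Q.
K = np.hstack([B, L @ B, L @ (L @ B)])
Q, _ = np.linalg.qr(K)

# Galerkin-projected reduced-order model (generic sketch).
Lr = Q.T @ L @ Q   # 9x9 instead of 20x20
Br = Q.T @ B

# The ROM reproduces the leading moments B^T L^k B of the diffusion
# transfer function seen from the target subset.
match = np.allclose(B.T @ (L @ (L @ B)), Br.T @ (Lr @ (Lr @ Br)))
```

Any subsequent spectral embedding or clustering can then be carried out on the small reduced model, which is what makes otherwise expensive relaxations (e.g., SDP-based k-means) affordable.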