Towards Very Deep Representation Learning for Subspace Clustering

IEEE Transactions on Knowledge and Data Engineering (2024)

Abstract
Deep subspace clustering based on the self-expressive layer has attracted increasing attention in recent years. Because of the self-expressive layer, these methods must load the whole dataset into a single batch to learn the self-expressive coefficients. Such a learning strategy places a heavy burden on memory, which severely limits the use of deeper network architectures (e.g., ResNet) and becomes a bottleneck when applying these methods to large-scale data. In this paper, we propose a new deep subspace clustering framework to address these challenges. In contrast to previous approaches that take the weights of a fully connected layer as the self-expressive coefficients, we obtain the self-expressive coefficients by learning an energy-based network in a mini-batch training manner. As a result, it is no longer necessary to load all data into one batch for learning, which avoids the above issue. Considering the powerful representation ability of recently popular self-supervised learning, we leverage self-supervised representation learning to learn the dictionary used to represent the data. Finally, we propose a joint framework that learns the self-expressive coefficients and the dictionary simultaneously. Extensive experiments on three publicly available datasets demonstrate the effectiveness of our method.
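
To make the memory argument concrete, below is a minimal sketch, assuming a PyTorch setup, of how self-expressive coefficients could be predicted by a small network in mini-batches against a fixed-size dictionary, instead of being stored as the N x N weights of a fully connected layer. The class and function names, the simple MLP coefficient head, the sparsity regularizer, and all shapes are illustrative assumptions, not the authors' exact energy-based architecture.

# Minimal sketch (assumptions noted above), not the paper's exact method.
import torch
import torch.nn as nn

class CoefficientNet(nn.Module):
    """Maps a sample's embedding to its coefficients over a fixed-size dictionary."""
    def __init__(self, embed_dim, dict_size, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dict_size),
        )

    def forward(self, z):
        # (batch, dict_size) self-expressive coefficients for this mini-batch only
        return self.net(z)

def batch_loss(coef_net, z_batch, dictionary, lam=1e-2):
    """Reconstruct each embedding as a combination of dictionary atoms.

    z_batch:    (B, d) embeddings from a (self-supervised) encoder
    dictionary: (K, d) dictionary representing the data
    """
    c = coef_net(z_batch)                          # (B, K)
    recon = c @ dictionary                         # (B, d)
    rec_loss = ((recon - z_batch) ** 2).sum(dim=1).mean()
    reg = c.abs().sum(dim=1).mean()                # sparsity regularizer (assumed)
    return rec_loss + lam * reg

if __name__ == "__main__":
    d, K, B = 128, 256, 32
    coef_net = CoefficientNet(d, K)
    dictionary = torch.randn(K, d, requires_grad=True)
    opt = torch.optim.Adam(list(coef_net.parameters()) + [dictionary], lr=1e-3)
    for _ in range(10):
        z = torch.randn(B, d)                      # stand-in for encoder features
        loss = batch_loss(coef_net, z, dictionary)
        opt.zero_grad()
        loss.backward()
        opt.step()

Because the coefficient network and dictionary are optimized jointly over mini-batches, memory usage depends on the batch and dictionary sizes rather than on the full dataset size, which is the property the abstract emphasizes.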
Keywords
Subspace Clustering, Representation Learning, Self-Supervised Learning