Simultaneously Learning Neighborship and Projection Matrix for Supervised Dimensionality Reduction

IEEE Transactions on Neural Networks and Learning Systems (2019)

Citations: 36 | Views: 30
Abstract
Explicitly or implicitly, most dimensionality reduction methods need to determine which samples are neighbors and how similar those neighbors are in the original high-dimensional space. The projection matrix is then learned on the assumption that this neighborhood information, e.g., the similarities, is known and fixed before learning. However, the intrinsic similarities of samples are difficult to measure precisely in high-dimensional space because of the curse of dimensionality. Consequently, the neighbors selected according to such similarities, and the projection matrix learned from those similarities and neighbors, may be suboptimal in the sense of classification and generalization. To overcome this drawback, in this paper, we propose to treat the similarities and neighbors as variables and to model them in a low-dimensional space. Both the optimal similarities and the projection matrix are obtained by minimizing a unified objective function, with nonnegative and sum-to-one constraints imposed on the similarities. Instead of setting the regularization parameter empirically, we treat it as a variable to be optimized. Interestingly, the optimal regularization parameter adapts to the neighbors in the low-dimensional space and has an intuitive meaning. Experimental results on the YALE B, COIL-100, and MNIST data sets demonstrate the effectiveness of the proposed method.
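The abstract does not spell out the optimization procedure, but the ingredients it names (simplex-constrained similarities learned in the projected space, an adaptive regularization parameter, and a projection matrix learned from the resulting neighborship) suggest an alternating scheme. Below is a minimal, hypothetical Python sketch of one such scheme; the function name, the closed-form simplex update, and the eigen-decomposition step for the projection are assumptions for illustration, not the authors' published algorithm. It also assumes every class contains more than k+1 samples.

```python
import numpy as np
from scipy.linalg import eigh

def learn_neighborship_and_projection(X, y, d, k, n_iter=20, seed=0):
    """Hypothetical sketch: alternate a closed-form similarity update
    with an eigen-decomposition update of the projection matrix.

    X : (D, n) data matrix, one sample per column
    y : (n,) class labels (neighbors restricted to same-class samples)
    d : target dimensionality, k : number of adaptive neighbors
    """
    D, n = X.shape
    rng = np.random.default_rng(seed)
    W = np.linalg.qr(rng.standard_normal((D, d)))[0]  # random orthonormal start
    S = np.zeros((n, n))

    for _ in range(n_iter):
        # Similarity step: distances are measured in the *projected* space,
        # so the neighborship is re-estimated where it is easier to measure.
        Y = W.T @ X
        dist = ((Y[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)  # squared Euclidean
        for i in range(n):
            di = dist[i].copy()
            di[i] = np.inf
            di[y != y[i]] = np.inf           # supervised: same-class neighbors only
            idx = np.argsort(di)[: k + 1]    # k neighbors plus the (k+1)-th distance
            d_k1 = di[idx[k]]                # drives the adaptive regularization
            top = d_k1 - di[idx[:k]]
            bottom = k * d_k1 - di[idx[:k]].sum() + 1e-12
            S[i] = 0.0
            S[i, idx[:k]] = np.maximum(top / bottom, 0.0)  # nonnegative, sums to one

        # Projection step: smallest eigenvectors of X L X^T give directions
        # that keep the currently estimated neighbors close after projection.
        A = (S + S.T) / 2.0
        L = np.diag(A.sum(axis=1)) - A       # graph Laplacian of the learned graph
        _, evecs = eigh(X @ L @ X.T)
        W = evecs[:, :d]

    return W, S
```

In this sketch the per-sample denominator plays the role of the adaptive regularization parameter: it grows with the gap between a sample's k-th and (k+1)-th neighbor distances in the low-dimensional space, which is consistent with the abstract's claim that the optimal parameter adapts to the neighbors there and has an intuitive meaning.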
Keywords
Dimensionality reduction, Training, Feature extraction, Toy manufacturing industry, Learning systems, Deep learning, Sparse matrices