Modal-Regression-Based Structured Low-Rank Matrix Recovery for Multiview Learning

arXiv (2021)

Abstract
Low-rank Multiview Subspace Learning (LMvSL) has shown great potential for cross-view classification in recent years. Despite their empirical success, existing LMvSL-based methods cannot simultaneously handle view discrepancy and discriminancy well, which leads to performance degradation when there is a large discrepancy among multiview data. To circumvent this drawback, motivated by block-diagonal representation learning, we propose Structured Low-rank Matrix Recovery (SLMR), a unique method that effectively removes view discrepancy and improves discriminancy through the recovery of a structured low-rank matrix. Furthermore, recent low-rank models address noise-contaminated data satisfactorily only under predefined assumptions on the noise distribution, such as a Gaussian or Laplacian distribution. These models are often impractical, since complicated noise in practice may violate such assumptions and the noise distribution is generally unknown in advance. To alleviate this limitation, modal regression is elegantly incorporated into the SLMR framework (termed MR-SLMR). Unlike previous LMvSL-based methods, MR-SLMR can handle any zero-mode noise variable, which covers a wide range of noise such as Gaussian noise, random noise, and outliers. The alternating direction method of multipliers (ADMM) framework and half-quadratic theory are used to optimize MR-SLMR efficiently. Experimental results on four public databases demonstrate the superiority of MR-SLMR and its robustness to complicated noise.
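As context for the abstract's claim that modal regression handles any zero-mode noise, the following is a minimal sketch of modal regression for a plain linear model, solved in the half-quadratic style the abstract mentions (a Gaussian kernel on the residuals yields an iteratively reweighted least-squares update). All names, the bandwidth `h`, and the toy data are illustrative assumptions, not the paper's actual MR-SLMR algorithm or code:

```python
import numpy as np

def modal_linear_regression(X, y, h=1.0, iters=50):
    """Fit w by maximizing the sum of Gaussian kernels on residuals.
    Half-quadratic view: each step is a weighted least-squares solve,
    with weights that shrink toward zero for large residuals (outliers)."""
    n, d = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least-squares init
    for _ in range(iters):
        r = y - X @ w
        a = np.exp(-r**2 / (2 * h**2))         # half-quadratic auxiliary weights
        W = a[:, None] * X                     # A @ X with A = diag(a)
        # weighted normal equations: (X^T A X) w = X^T A y
        w = np.linalg.solve(X.T @ W + 1e-8 * np.eye(d), W.T @ y)
    return w

# Toy data: linear signal plus small Gaussian noise, with 10% gross outliers.
# The noise mode is zero even though the overall distribution is skewed.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(-1, 1, 200)])
w_true = np.array([1.0, 2.0])
y = X @ w_true + 0.05 * rng.standard_normal(200)
y[:20] += 5.0                                  # gross outliers
w_hat = modal_linear_regression(X, y, h=0.3)
print(w_hat)
```

Ordinary least squares would be pulled toward the outliers; the mode-seeking objective down-weights them exponentially, so the recovered coefficients stay close to the true ones. MR-SLMR applies this same robustness principle to the error term of a structured low-rank recovery rather than to a simple linear fit.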
Keywords
Block-diagonal representation learning, cross-view classification, low-rank representation, multiview learning