Large-Scale Robust Semisupervised Classification
IEEE Transactions on Cybernetics (2019)
Abstract
Semisupervised learning aims to leverage both labeled and unlabeled data to improve performance, and most existing methods are graph-based. However, graph-based semisupervised methods cannot handle large-scale data, since constructing the graph Laplacian matrix is computationally expensive. On the other hand, the substantial amount of unlabeled data in the training stage of semisupervised learning can introduce large uncertainties and potential threats. It is therefore crucial to enhance the robustness of semisupervised classification. In this paper, a novel large-scale robust semisupervised learning method is proposed within the framework of the capped ℓ2,p-norm. This strategy is superior not only in computational cost, because it makes the graph Laplacian matrix unnecessary, but also in robustness to outliers, since the capped ℓ2,p-norm is used for loss measurement. An efficient optimization algorithm is developed to solve the resulting nonconvex and nonsmooth problem, and its computational complexity is analyzed in detail. Finally, extensive experiments on six benchmark data sets demonstrate the effectiveness and superiority of the proposed method.
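The outlier robustness claimed above comes from capping the per-sample loss: a capped ℓ2,p-norm sums min(‖r_i‖₂^p, ε) over the residual rows r_i, so any single outlier can contribute at most ε. The sketch below illustrates this capping effect in NumPy; the function name, the specific p and ε values, and the plain residual matrix are illustrative assumptions, not the paper's exact formulation or algorithm.

```python
import numpy as np

def capped_l2p_loss(R, p=0.5, eps=1.0):
    """Illustrative capped l2,p-norm loss: sum_i min(||r_i||_2^p, eps).

    R   : (n, d) matrix whose rows are per-sample residuals.
    p   : exponent in (0, 2]; smaller p already downweights large rows.
    eps : cap; rows with ||r_i||_2^p >= eps contribute exactly eps,
          so a gross outlier cannot dominate the total loss.
    (Sketch only; the paper's objective couples this loss with a
    classifier and an optimization scheme not reproduced here.)
    """
    row_norms = np.linalg.norm(R, axis=1)            # ||r_i||_2 per sample
    return float(np.sum(np.minimum(row_norms ** p, eps)))

# Two well-fit samples and one gross outlier: the capped loss stays
# small, while the uncapped sum of ||r_i||_2^p is dominated by the outlier.
R = np.array([[0.1, 0.0],
              [0.0, 0.1],
              [100.0, 0.0]])
capped = capped_l2p_loss(R)                          # ~1.63
uncapped = float(np.sum(np.linalg.norm(R, axis=1) ** 0.5))  # ~10.63
```

This bounded per-sample contribution is what makes the loss nonconvex and nonsmooth, which is why the paper needs a dedicated optimization algorithm rather than standard convex solvers.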
Keywords
Robustness, Semisupervised learning, Optimization, Laplace equations, Loss measurement, Computational modeling, Training