The Selected-completely-at-random Complementary Label is a Practical Weak Supervision for Multi-class Classification
arXiv (2023)
Abstract
Complementary-label learning is a weakly supervised learning problem in which
each training example is associated with one or multiple complementary labels
indicating the classes to which it does not belong. Existing consistent
approaches have relied on the uniform distribution assumption to model the
generation of complementary labels, or on an ordinary-label training set to
estimate the transition matrix in non-uniform cases. However, either condition
may not be satisfied in real-world scenarios. In this paper, we propose a novel
consistent approach that does not rely on these conditions. Inspired by the
positive-unlabeled (PU) learning literature, we propose an unbiased risk
estimator based on the Selected Completely At Random assumption for
complementary-label learning. We then introduce a risk-correction approach to
address overfitting problems. Furthermore, we find that complementary-label
learning can be expressed as a set of negative-unlabeled binary classification
problems when using the one-versus-rest strategy. Extensive experimental
results on both synthetic and real-world benchmark datasets validate the
superiority of our proposed approach over state-of-the-art methods.
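For intuition, below is a minimal sketch of the one-versus-rest (OVR) reduction the abstract describes: for each class k, examples carrying the complementary label k are treated as observed negatives and all examples serve as unlabeled data, yielding K negative-unlabeled (NU) binary problems. The unbiased risk decomposition and the nnPU-style non-negative correction shown here follow the standard PU-learning literature and are not necessarily the paper's exact formulation; the logistic loss, the assumed class priors `priors`, the helper `ovr_nu_risk`, and the synthetic data are illustrative assumptions.

```python
# Sketch (not the authors' exact method): OVR reduction of complementary-label
# learning to K negative-unlabeled binary problems with an unbiased risk
# estimator and a non-negative correction in the style of uPU/nnPU.
import torch
import torch.nn.functional as F


def logistic_loss(margins: torch.Tensor) -> torch.Tensor:
    """Binary logistic loss ell(z) = log(1 + exp(-z)) on margins z = y * f(x)."""
    return F.softplus(-margins)


def ovr_nu_risk(scores, comp_labels, priors, correction=True):
    """Corrected OVR negative-unlabeled risk (illustrative sketch).

    scores:      (n, K) real-valued outputs f_k(x) of an OVR classifier.
    comp_labels: (n, K) 0/1 matrix; comp_labels[i, k] = 1 means class k is a
                 complementary label of example i (i is a negative for task k).
    priors:      (K,) assumed class priors pi_k = P(y = k).
    """
    n, K = scores.shape
    total = scores.new_zeros(())
    for k in range(K):
        f_k = scores[:, k]
        neg_mask = comp_labels[:, k].bool()      # observed negatives for task k
        if not neg_mask.any():                   # assume each class gets some
            continue                             # complementary labels
        pi_k = priors[k]
        # Unbiased NU decomposition of the positive-part risk:
        #   pi_k * E_p[ell(+f)] = E_u[ell(+f)] - (1 - pi_k) * E_n[ell(+f)]
        pos_part = logistic_loss(f_k).mean() \
            - (1 - pi_k) * logistic_loss(f_k[neg_mask]).mean()
        neg_part = (1 - pi_k) * logistic_loss(-f_k[neg_mask]).mean()
        if correction:
            # nnPU-style clamp to keep the positive-part risk non-negative,
            # which is one way to curb the overfitting the abstract mentions;
            # the paper's own risk correction may differ in detail.
            pos_part = torch.clamp(pos_part, min=0.0)
        total = total + pos_part + neg_part
    return total


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d, K = 256, 10, 4
    x = torch.randn(n, d)
    y = torch.randint(0, K, (n,))
    # Hypothetical data: one complementary label per example, drawn uniformly
    # over the non-true classes, just to make the sketch runnable end to end.
    comp = torch.zeros(n, K)
    for i in range(n):
        choices = [c for c in range(K) if c != y[i].item()]
        comp[i, choices[torch.randint(len(choices), (1,)).item()]] = 1.0
    model = torch.nn.Linear(d, K)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    priors = torch.full((K,), 1.0 / K)           # assumed known or estimated
    for step in range(200):
        opt.zero_grad()
        loss = ovr_nu_risk(model(x), comp, priors)
        loss.backward()
        opt.step()
    pred = model(x).argmax(dim=1)
    print("train accuracy:", (pred == y).float().mean().item())
```

The design choice mirrored here is that the expectation over positives, which is never observed in complementary-label data, is rewritten using the unlabeled marginal and the observed negatives under the SCAR assumption; without the clamp the estimator is unbiased but can drive the empirical positive-part risk negative, which is the overfitting behavior the risk-correction approach targets.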