Instance-Dependent Positive and Unlabeled Learning With Labeling Bias Estimation

IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)

Abstract
This paper studies instance-dependent Positive and Unlabeled (PU) classification, where whether a positive example will be labeled (indicated by $s$) is not only related to the class label $y$, but also depends on the observation $\mathbf{x}$. Therefore, the labeling probability on positive examples is not uniform as previous works assumed, but is biased toward some simple or critical data points. To depict the above dependency relationship, a graphical model is built in this paper, which further leads to a maximization problem on the induced likelihood function regarding $P(s,y|\mathbf{x})$. By utilizing the well-known EM and Adam optimization techniques, the labeling probability of any positive example $P(s=1|y=1,\mathbf{x})$ as well as the classifier induced by $P(y|\mathbf{x})$ can be acquired. Theoretically, we prove that the critical solution always exists, and is locally unique for the linear model if some sufficient conditions are met. Moreover, we upper bound the generalization error for both linear logistic and non-linear network instantiations of our algorithm, with the convergence rate of expected risk to empirical risk as $\mathcal{O}(1/\sqrt{k}+1/\sqrt{n-k}+1/\sqrt{n})$ ($k$ and $n$ are the sizes of the positive set and the entire training set, respectively). Empirically, we compare our method with state-of-the-art instance-independent and instance-dependent PU algorithms on a wide range of synthetic, benchmark and real-world datasets, and the experimental results firmly demonstrate the advantage of the proposed method over the existing PU approaches.
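The abstract's core idea, that the observed label probability factorizes as $P(s=1|\mathbf{x}) = P(y=1|\mathbf{x})\,P(s=1|y=1,\mathbf{x})$ and both factors can be fit by EM on the induced likelihood, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`em_pu`, `w`, `v`), the logistic parameterization of both factors, and the use of plain gradient ascent in the M-step (the paper uses Adam) are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(z):
    # Numerically stable logistic function.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def em_pu(X, s, n_em=30, n_grad=50, lr=0.1, rng=None):
    """EM sketch: w parameterizes the classifier P(y=1|x) = sigmoid(w.x),
    v parameterizes the instance-dependent labeling propensity
    P(s=1|y=1,x) = sigmoid(v.x). Only (X, s) are observed."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)
    v = rng.normal(scale=0.01, size=d)
    for _ in range(n_em):
        eta = sigmoid(X @ w)   # current estimate of P(y=1|x)
        e = sigmoid(X @ v)     # current estimate of P(s=1|y=1,x)
        # E-step: posterior r_i = P(y_i=1 | x_i, s_i).
        # s=1 implies y=1; for s=0, r = eta(1-e) / (1 - eta*e).
        r = np.where(s == 1, 1.0, eta * (1 - e) / (1 - eta * e + 1e-12))
        # M-step: gradient ascent on the expected complete-data
        # log-likelihood, holding the responsibilities r fixed.
        for _ in range(n_grad):
            eta = sigmoid(X @ w)
            e = sigmoid(X @ v)
            w += lr * (X.T @ (r - eta)) / n
            v += lr * (X.T @ (r * (s - e))) / n
    return w, v

# Toy check on synthetic data whose labeling bias depends on x:
# positives with a larger margin are labeled more often.
rng = np.random.default_rng(1)
n, d = 4000, 2
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0])
y = (rng.random(n) < sigmoid(X @ w_true)).astype(int)
prop = sigmoid(2.0 * (X @ w_true))  # instance-dependent propensity
s = ((y == 1) & (rng.random(n) < prop)).astype(int)

w, v = em_pu(X, s)
acc = np.mean((sigmoid(X @ w) > 0.5).astype(int) == y)
```

Note the generative assumption in the E-step: an example is observed labeled only if it is both positive and selected, so $P(s=1|\mathbf{x})$ is the product of the two logistic factors; the posterior for unlabeled examples follows from Bayes' rule on that product.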
Keywords
Instance-dependent PU learning, labeling bias, maximum likelihood estimation, solution uniqueness, generalization bound