HNSSL: Hard Negative-Based Self-Supervised Learning.

CVPR Workshops (2023)

Abstract
Recently, learning from vast unlabeled data, especially self-supervised learning, has been emerging and attracting widespread attention. Self-supervised learning followed by supervised fine-tuning on a few labeled examples can significantly improve label efficiency and outperform standard supervised training using fully annotated data [6]. In this work, we present a novel hard negative-based self-supervised deep learning paradigm, named HNSSL. Specifically, we design a student-teacher network to generate multiple views of the data for self-supervised learning and integrate online hard negative pair mining into the training. We then derive a new triplet-type loss that considers both positive sample pairs and online-mined hard negative sample pairs. Extensive experiments demonstrate the effectiveness of the proposed method and its components on ILSVRC-2012 using the same backbone network. Specifically, for the linear evaluation task, the proposed HNSSL with a ResNet-50 encoder achieves a top-1 accuracy of 77.1%, outperforming its previous counterparts by 2.8%. For the semi-supervised learning task, HNSSL with a ResNet-50 encoder obtains a top-1 accuracy of 73.4%, outperforming previous ResNet-50 encoder-based semi-supervised results by 4.6% while using only 10% of the labels. For transfer learning with linear evaluation, HNSSL with a ResNet-50 encoder achieves the best accuracy on six of seven widely used transfer learning datasets, outperforming previous ResNet-50 encoder-based transfer learning results by 2.5% on average.
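The abstract does not give the loss formula, so the following is only a minimal PyTorch sketch of one plausible reading of it: a triplet-type objective over two augmented views of each image, where the hardest negative for each anchor is mined online from the other teacher embeddings in the batch. Every name here (hnssl_loss, the margin value, cosine similarity, in-batch mining) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of a triplet-type loss with online in-batch hard
# negative mining, in the spirit of the HNSSL abstract. Not the authors' code.
import torch
import torch.nn.functional as F

def hnssl_loss(student_emb: torch.Tensor,
               teacher_emb: torch.Tensor,
               margin: float = 0.5) -> torch.Tensor:
    """student_emb, teacher_emb: (N, D) embeddings of two augmented views
    of the same N images, so row i of each tensor forms a positive pair.
    The margin value is an assumption; the paper does not state one here.
    """
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)

    sim = s @ t.T            # (N, N) cosine similarities across the batch
    pos = sim.diag()         # similarity of each anchor to its own positive

    # Online hard negative mining: mask the positives on the diagonal,
    # then take each anchor's most similar *other* image as its negative.
    eye = torch.eye(len(s), dtype=torch.bool, device=s.device)
    hard_neg = sim.masked_fill(eye, float('-inf')).max(dim=1).values

    # Triplet margin: push positives above the hardest negatives by `margin`.
    return F.relu(hard_neg - pos + margin).mean()
```

In a student-teacher setup of this kind, the teacher branch would typically be held fixed per step (e.g. via stop-gradient or a momentum copy of the student), but the abstract does not specify that detail.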
Keywords
fully annotated data, hard negative-based self-supervised, HNSSL, online hard negative pair mining, online mined hard negative sample pairs, outperform standard supervised training, previous ResNet-50 encoder, self-supervised deep learning paradigm, self-supervised learning, supervised fine-tuning, transfer learning, vast unlabeled data