Efficient Few-shot Classification via Contrastive Pre-training on Web Data

IEEE Transactions on Artificial Intelligence (2022)

Abstract
Few-shot classification is challenging due to limited data and labels. Existing algorithms usually address this problem by pre-training models on a considerable amount of annotated data that shares knowledge with the target domain. Nevertheless, large quantities of homogeneous data samples are not always available. To overcome this obstacle, we develop a few-shot learning framework that prepares data automatically and still produces well-performing models. The framework conducts contrastive learning on unlabeled web images: instead of requiring manually annotated data, it trains models by constructing pseudo labels. Moreover, since online data is virtually limitless and continues to be generated, the model can constantly acquire up-to-date knowledge from the Internet. We further observe that the generalization ability of the learned representation is crucial for self-supervised learning. To highlight its importance, we propose a simple yet effective normalization strategy, which significantly boosts the accuracy of the trained models. We demonstrate the superiority of the proposed framework with experiments on miniImageNet, tieredImageNet, and Omniglot. The results indicate that our method surpasses previous unsupervised counterparts by a large margin and achieves performance comparable to some supervised ones.
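The abstract does not specify the exact contrastive objective or the proposed normalization strategy. As a rough illustration only, the sketch below shows a generic SimCLR-style NT-Xent loss in PyTorch, where two augmented views of each unlabeled web image form a constructed pseudo-positive pair and embeddings are L2-normalized before computing similarities (a common normalization choice in contrastive learning, not necessarily the paper's). All names here are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over two augmented views of the same unlabeled images.

    Each pair (z1[i], z2[i]) acts as a constructed pseudo label (positive
    pair); every other sample in the batch serves as a negative.
    """
    # L2-normalize embeddings so the dot products below are cosine
    # similarities on the unit hypersphere.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)        # shape: (2N, d)
    sim = z @ z.t() / temperature         # pairwise similarity logits
    sim.fill_diagonal_(float('-inf'))     # a sample is never its own negative
    n = z1.size(0)
    # The positive for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets.to(sim.device))

# Usage: z1, z2 are encoder outputs for two random augmentations of a
# batch of crawled web images (hypothetical shapes for illustration).
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = nt_xent_loss(z1, z2)
```

In a pipeline like the one the abstract describes, an encoder backbone would presumably be pre-trained on crawled web images with an objective of this kind before few-shot evaluation on the target benchmarks.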
Keywords
Contrastive learning, few-shot classification, generalization ability, scarce data, web data