Self-supervised Learning of Geometrically Stable Features Through Probabilistic Introspection

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018)

Abstract
Self-supervision can dramatically cut back the amount of manually-labelled data required to train deep neural networks. While self-supervision has usually been considered for tasks such as image classification, in this paper we aim at extending it to geometry-oriented tasks such as semantic matching and part detection. We do so by building on several recent ideas in unsupervised landmark detection. Our approach learns dense distinctive visual descriptors from an unlabelled dataset of images using synthetic image transformations. It does so by means of a robust probabilistic formulation that can introspectively determine which image regions are likely to result in stable image matching. We show empirically that a network pre-trained in this manner requires significantly less supervision to learn semantic object parts compared to numerous pre-training alternatives. We also show that the pre-trained representation is excellent for semantic object matching.
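The core idea, descriptors learned from synthetic warps with a per-pixel reliability estimate that down-weights unstable regions, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the heteroscedastic form `err / (2·sigma²) + log(sigma)` is an assumed stand-in for the paper's exact probabilistic matching loss, and `introspective_match_loss` is a hypothetical helper name.

```python
import numpy as np

def introspective_match_loss(desc_a, desc_b, sigma):
    """Descriptor-matching loss weighted by predicted reliability.

    desc_a, desc_b: (N, D) dense descriptors sampled at corresponding
    points in two synthetically transformed views of the same image.
    sigma: (N,) predicted per-point uncertainty (larger = the region is
    deemed less likely to yield stable matches).
    """
    # squared descriptor distance at each ground-truth correspondence
    err = np.sum((desc_a - desc_b) ** 2, axis=1)
    # assumed heteroscedastic weighting: high-sigma (unreliable) regions
    # contribute less matching error, but the log(sigma) penalty stops the
    # network from declaring every region unreliable
    return float(np.mean(err / (2.0 * sigma ** 2) + np.log(sigma)))
```

Under this weighting, raising `sigma` for a poorly matching point lowers its loss up to the point where the `log(sigma)` penalty dominates, which is the introspection mechanism: the network learns to flag regions where matching is unstable.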
Keywords
self-supervised learning, probabilistic introspection, deep neural networks, image classification, unsupervised landmark detection, synthetic image transformations, robust probabilistic formulation, image regions, pre-trained representation, semantic object matching, image matching, geometry-oriented tasks