RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022

Abstract
Recent studies have begun to address the detection and rejection of out-of-distribution (OOD) samples as a major challenge in the safe deployment of deep learning (DL) models. Ideally, a DL model should be confident only about in-distribution (ID) data, which is the driving principle of OOD detection. In this paper, we propose a simple yet effective generalized OOD detection method that does not rely on any out-of-distribution dataset. Our approach is based on self-supervised feature learning of the training samples, where the embeddings lie on a compact low-dimensional space. Motivated by recent studies showing that self-supervised adversarial contrastive learning helps robustify the model, we empirically show that a model pre-trained with self-supervised contrastive learning yields better uni-dimensional feature learning in the latent space. The proposed method, referred to as RODD, surpasses SOTA detection performance on an extensive suite of benchmark OOD detection tasks. On the CIFAR-100 benchmarks, RODD achieves a 26.97% lower false positive rate (FPR@95) compared to SOTA methods. Our code is publicly available.
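The abstract does not spell out the scoring rule, but a minimal sketch of the kind of pipeline it describes, per-class uni-dimensional feature directions in the latent space plus the reported FPR@95 metric, might look like the following. The singular-vector scoring rule and all function names here are assumptions introduced for illustration; only the FPR@95 definition follows the standard convention.

```python
import numpy as np

def class_singular_vectors(features, labels, num_classes):
    """Per-class first singular vector of L2-normalized ID features.
    Illustrative only: the abstract says embeddings lie on a compact
    low-dimensional space; the exact procedure is given in the paper."""
    dirs = []
    for c in range(num_classes):
        f = features[labels == c]
        f = f / np.linalg.norm(f, axis=1, keepdims=True)
        # The first right singular vector spans the dominant 1-D subspace.
        _, _, vt = np.linalg.svd(f, full_matrices=False)
        dirs.append(vt[0])
    return np.stack(dirs)  # shape: (num_classes, feat_dim)

def ood_score(test_features, class_dirs):
    """Higher score = more ID-like: max |cosine similarity| to any
    class direction (hypothetical scoring rule for illustration)."""
    f = test_features / np.linalg.norm(test_features, axis=1, keepdims=True)
    return np.abs(f @ class_dirs.T).max(axis=1)

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR@95: fraction of OOD samples scored above the threshold that
    retains 95% of ID samples (the metric reported in the abstract)."""
    threshold = np.percentile(id_scores, 5)  # keep top 95% of ID scores
    return float((ood_scores >= threshold).mean())
```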
Keywords
CIFAR-100 benchmark, unidimensional feature learning, self-supervised adversarial contrastive learning, generalized OOD detection method, SOTA detection performance, latent space, compact low-dimensional space, self-supervised feature learning, in-distribution data, DL model, deep learning models, robust out-of-distribution detection, RODD