Learning by Erasing: Conditional Entropy Based Transferable Out-of-Distribution Detection

AAAI 2024 (2024)

Abstract
Detecting out-of-distribution (OOD) inputs is crucial for deploying machine learning models safely in the real world. However, existing OOD detection methods require retraining the model on each new in-distribution (ID) dataset. In this paper, we propose a transferable OOD detection method based on Deep Generative Models (DGMs) that does not require retraining on a new ID dataset. We first establish and substantiate two hypotheses about DGMs: (i) DGMs tend to acquire low-level features in preference to semantic information; and (ii) the lower bound of a DGM's log-likelihood is tied to the conditional entropy between the model's input and its target output. Drawing on these hypotheses, we present an image-erasing strategy designed to create a distinct conditional entropy distribution for each ID dataset. By training a DGM on a complex dataset with the proposed image-erasing strategy, the DGM can capture the discrepancy between the conditional entropy distributions of different ID datasets without retraining. We validate the proposed method on five datasets and show that, without retraining, it achieves performance comparable to state-of-the-art group-based OOD detection methods. The project code will be open-sourced on our project website.
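The second hypothesis alludes to a standard information-theoretic relation; as a sketch in generic notation (not the paper's own derivation), for any conditional model $q_\theta(y \mid x)$ trained on data drawn from $p(x, y)$, the achievable expected log-likelihood is capped by the negative conditional entropy:

$$
\mathbb{E}_{p(x,y)}\!\left[\log q_\theta(y \mid x)\right]
= -H(Y \mid X) - \mathbb{E}_{p(x)}\!\left[\mathrm{KL}\!\left(p(y \mid x)\,\|\,q_\theta(y \mid x)\right)\right]
\;\le\; -H(Y \mid X),
$$

with equality when $q_\theta(y \mid x) = p(y \mid x)$. Inputs whose erased-to-original mapping has higher conditional entropy therefore force lower likelihoods, which is the kind of gap the erasing strategy is designed to expose across ID datasets.

Below is a minimal sketch of how an erase-then-reconstruct setup of this kind could look, assuming a simple random patch-erasing transform and a generic DGM likelihood function; the function names, parameters, and scoring rule are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def erase_patches(image, patch_size=8, erase_frac=0.3, rng=None):
    """Hypothetical erasing transform: zero out a random fraction of
    non-overlapping patches to produce the conditioning input x.
    A DGM trained to reconstruct the original image y from x sees a
    harder prediction problem when H(Y | X) is larger."""
    if rng is None:
        rng = np.random.default_rng()
    x = image.copy()
    h, w = image.shape[:2]
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            if rng.random() < erase_frac:
                x[i:i + patch_size, j:j + patch_size] = 0.0
    return x

def ood_score(dgm_log_likelihood, image, n_samples=4):
    """Illustrative OOD score: average the DGM's log-likelihood of the
    original image given several erased versions; lower likelihood
    (higher score) suggests the image is out-of-distribution.
    `dgm_log_likelihood(x, y)` is a user-supplied callable wrapping a
    DGM trained once on a complex dataset with the erasing strategy."""
    scores = [dgm_log_likelihood(erase_patches(image), image)
              for _ in range(n_samples)]
    return -float(np.mean(scores))
```

In this sketch the DGM itself is never retrained; only the erased inputs and the resulting likelihoods vary across ID datasets, which mirrors the transferability claim in the abstract.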
Keywords
CV: Interpretability, Explainability, and Transparency; CV: Adversarial Attacks & Robustness; CV: Low Level & Physics-based Vision; ML: Representation Learning; ML: Transfer, Domain Adaptation, Multi-Task Learning; ML: Unsupervised & Self-Supervised Learning