Learning Transferable Representations for Image Anomaly Localization Using Dense Pretraining.

IEEE/CVF Winter Conference on Applications of Computer Vision (2024)

Abstract
Image anomaly localization (IAL) is widely applied in fault detection and industrial inspection to discover anomalous patterns in images at the pixel level. The unique challenge of this task is the lack of comprehensive anomaly samples for model training. State-of-the-art methods train end-to-end models that leverage outlier exposure to simulate pseudo anomalies, but they transfer poorly to new datasets due to their inherent bias toward the synthesized outliers seen during training. Recently, two-stage instance-level self-supervised learning (SSL) has shown potential for learning generic representations for IAL. However, we hypothesize that dense-level SSL is more compatible with IAL, which requires pixel-level prediction. In this paper, we bridge these gaps by proposing a two-stage, dense pretraining model tailored to the IAL task. More specifically, our model utilizes dual positive-pair selection criteria and dual feature scales to learn more effective representations. Through extensive experiments, we show that our learned representations achieve significantly better anomaly localization performance among two-stage models, while requiring almost half the convergence time. Moreover, our learned representations transfer better to unseen datasets. Code is available at https://github.com/terrlo/DS2.
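To make the dense-level SSL idea concrete, the sketch below shows a generic dense contrastive (InfoNCE-style) loss over per-pixel features from two augmented views, where matching spatial locations form positive pairs and all other locations serve as negatives. This is a minimal NumPy illustration of the general technique, not the authors' DS2 model; the function name, temperature value, and pairing scheme are assumptions for illustration.

```python
import numpy as np

def dense_info_nce(feat_a, feat_b, tau=0.2):
    """Illustrative dense-level contrastive (InfoNCE) loss.

    feat_a, feat_b: (N, D) arrays of per-location features from two
    augmented views of the same image. Row i of feat_a and feat_b is
    assumed to be a positive pair (same spatial location); all other
    rows act as negatives. Not the DS2 loss, just a generic sketch.
    """
    # L2-normalize so the dot product is cosine similarity
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    logits = a @ b.T / tau                        # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal
```

In a dense-pretraining setting this loss would be applied to feature maps flattened over spatial positions, so the model learns location-sensitive representations rather than a single instance-level embedding, which is the property that makes pixel-level anomaly localization feasible.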
Keywords
Algorithms, Image recognition and understanding, Machine learning architectures, formulations, and algorithms