Layer-Wise External Attention by Well-Localized Attention Map for Efficient Deep Anomaly Detection

Keiichi Nakanishi, Ryo Shiroma, Tokihisa Hayakawa, Ryoya Katafuchi, Terumasa Tokunaga

SN Computer Science (2024)

Abstract
The external attention mechanism offers a promising approach to enhancing image anomaly detection (Hayakawa et al., in: IMPROVE, pp. 100–110, 2023). Nevertheless, the effectiveness of this method is contingent upon the judicious selection of the intermediate layer at which external attention is applied. In this study, we performed a comprehensive series of experiments to clarify the mechanisms through which external attention improves detection performance. We assessed the performance of LEA-Net (Hayakawa et al., in: IMPROVE, pp. 100–110, 2023), which implements layer-wise external attention, on the MVTec AD and Plant Village datasets. The detection performance of LEA-Net was compared with that of the baseline model under anomaly maps generated by three unsupervised approaches. In addition, we investigated the relationship between the detection performance of LEA-Net and the selection of the attention point, i.e., the intermediate layer at which external attention is applied. The findings reveal that the synergy between the dataset and the generated anomaly map influences the effectiveness of LEA-Net. For poorly localized anomaly maps, the selection of the attention point becomes a pivotal factor in determining detection efficiency. At shallow attention points, a well-localized attention map notably improves detection performance. At deeper attention points, the overall intensity of the attention map is essential; this intensity can be substantially amplified by layer-wise external attention, even for a low-intensity anomaly map. Overall, the results suggest that for layer-wise external attention, the positional attributes of anomalies hold greater significance than the overall intensity or visual appearance of the anomaly map.
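The layer-wise external attention described above can be illustrated with a minimal sketch: an externally generated anomaly map is resized to the spatial resolution of a chosen intermediate layer (the attention point) and used to reweight that layer's feature maps. The module below is a hypothetical PyTorch illustration under these assumptions, not the authors' released LEA-Net implementation; all names (`ExternalAttentionBlock`, `proj`) are illustrative.

```python
# Minimal sketch (assumption, not the authors' code) of layer-wise external
# attention: an external anomaly map modulates intermediate CNN features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttentionBlock(nn.Module):
    """Fuses a single-channel anomaly map into features at one attention point."""
    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 conv projects the anomaly map to per-channel attention weights.
        self.proj = nn.Conv2d(1, in_channels, kernel_size=1)

    def forward(self, features: torch.Tensor, anomaly_map: torch.Tensor) -> torch.Tensor:
        # Resize the anomaly map to the spatial size of the attention point.
        attn = F.interpolate(anomaly_map, size=features.shape[-2:],
                             mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.proj(attn))   # attention weights in (0, 1)
        return features * (1.0 + attn)          # residual-style reweighting

# Usage: apply at one intermediate layer of a detection backbone.
feats = torch.randn(8, 64, 56, 56)              # features at the attention point
amap = torch.rand(8, 1, 224, 224)               # unsupervised anomaly map
out = ExternalAttentionBlock(64)(feats, amap)
print(out.shape)                                # torch.Size([8, 64, 56, 56])
```

In this sketch, changing which layer receives the block corresponds to moving the attention point between shallow and deep positions, which is the factor the study evaluates.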
Keywords
Anomaly detection, Visual inspection AI, Deep learning, Visual attention mechanism, Self-attention, MVTec AD, Plant science