SpatialFormer: Semantic and Target Aware Attentions for Few-Shot Learning
arXiv (2023)
Abstract
Recent Few-Shot Learning (FSL) methods put emphasis on generating
discriminative embedding features to precisely measure the similarity between
support and query sets. Current CNN-based cross-attention approaches generate
discriminative representations by enhancing the mutually semantically similar
regions of support and query pairs. However, they suffer from two problems: the
CNN structure produces inaccurate attention maps based on local features, and
mutually similar backgrounds cause distraction. To alleviate these problems, we
design a novel SpatialFormer structure to generate more accurate attention
regions based on global features. Unlike the traditional Transformer, which
models intrinsic instance-level similarity and causes accuracy degradation in
FSL, our SpatialFormer explores the semantic-level similarity between paired
inputs to boost performance. We then derive two specific attention modules,
named SpatialFormer Semantic Attention (SFSA) and SpatialFormer Target
Attention (SFTA), which enhance the target object regions while reducing
background distraction. In particular, SFSA highlights the regions with the same
semantic information between paired features, and SFTA finds potential
foreground object regions of a novel feature that are similar to base
categories. Extensive experiments show that our methods are effective and
achieve new state-of-the-art results on few-shot classification benchmarks.
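The core idea described above, attending from query spatial positions to semantically similar support positions and using the result to enhance the query feature, can be sketched as follows. This is a minimal numpy illustration of pair-wise spatial cross-attention, not the authors' exact SpatialFormer implementation; the function name, shapes, and residual form are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_cross_attention(query_feat, support_feat):
    """Enhance query regions that are similar to support regions.

    query_feat, support_feat: (C, H, W) feature maps (hypothetical shapes).
    Returns an enhanced query feature map of the same shape.
    """
    C, H, W = query_feat.shape
    q = query_feat.reshape(C, H * W).T        # (HW, C): query positions
    s = support_feat.reshape(C, H * W).T      # (HW, C): support positions
    # Each query position attends over all support positions.
    attn = softmax(q @ s.T / np.sqrt(C), axis=-1)  # (HW, HW)
    enhanced = q + attn @ s                   # residual enhancement
    return enhanced.T.reshape(C, H, W)

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4, 4))
s = rng.standard_normal((8, 4, 4))
out = spatial_cross_attention(q, s)
print(out.shape)  # (8, 4, 4)
```

Because the attention weights are computed from global (all-position) similarity rather than local convolutions, regions of the query that match the support semantics are amplified wherever they occur in the map, which is the motivation the abstract gives for moving from CNN-based cross-attention to a Transformer-style structure.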