RAMS-Trans: Recurrent Attention Multi-scale Transformer for Fine-grained Image Recognition

International Multimedia Conference (2021)

Abstract
In fine-grained image recognition (FGIR), the localization and amplification of region attention is an important factor, and it has been explored extensively in convolutional neural network (CNN)-based approaches. The recently developed vision transformer (ViT) has achieved promising results in computer vision tasks; compared with CNNs, its image sequentialization is a brand-new approach. However, due to the fixed size of its patches, ViT is limited in its receptive field size, lacks the local attention of CNNs, and cannot generate multi-scale features to learn discriminative region attention. To facilitate the learning of discriminative region attention without box/part annotations, we use the strength of the attention weights to measure the importance of the patch tokens corresponding to the raw image. We propose the recurrent attention multi-scale transformer (RAMS-Trans), which uses the transformer's self-attention to recursively learn discriminative region attention in a multi-scale manner. At the core of our approach lies the dynamic patch proposal module (DPPM), which guides region amplification to complete the integration of multi-scale image patches. The DPPM starts with the full-size image patches and iteratively scales up the region attention to generate new patches from global to local, using the intensity of the attention weights generated at each scale as an indicator. Our approach requires only the attention weights that come with ViT itself and can be easily trained end-to-end. Extensive experiments demonstrate that RAMS-Trans outperforms existing works, including efficient CNN models, achieving state-of-the-art results on three benchmark datasets.
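To make the DPPM idea concrete, the sketch below shows one plausible way attention weights could guide region amplification: aggregate the per-patch attention scores, keep patches above the mean, take the bounding box of the surviving patches, and crop-and-resize that region as the input for the next scale. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name, the mean-thresholding rule, and the nearest-neighbor resize are all hypothetical choices.

```python
import numpy as np

def propose_region(attn_weights, image, patch_size=16, out_size=448):
    """Hypothetical sketch of attention-guided region amplification.

    attn_weights: (num_patches,) importance score for each patch token,
        e.g. CLS-to-patch attention averaged over heads/layers (assumed).
    image: (H, W, C) raw input image as a numpy array.
    Returns the amplified crop, resized back to out_size x out_size.
    """
    h = w = int(np.sqrt(attn_weights.size))        # assume a square patch grid
    mask = attn_weights.reshape(h, w) >= attn_weights.mean()
    ys, xs = np.nonzero(mask)                      # patches above the mean score
    # Bounding box of the high-attention patches, in pixel coordinates.
    y0, y1 = ys.min() * patch_size, (ys.max() + 1) * patch_size
    x0, x1 = xs.min() * patch_size, (xs.max() + 1) * patch_size
    crop = image[y0:y1, x0:x1]
    # Nearest-neighbor resize back to the network's input resolution.
    iy = np.arange(out_size) * crop.shape[0] // out_size
    ix = np.arange(out_size) * crop.shape[1] // out_size
    return crop[iy][:, ix]
```

In the recurrent setup the abstract describes, a proposal like this would be fed back through the transformer at the next scale, with the attention weights coming from ViT's own self-attention, so no box/part annotations are required.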
Keywords
recurrent attention, image recognition, rams-trans, multi-scale, fine-grained