ADA-ViT: Attention-Guided Data Augmentation for Vision Transformers

2023 IEEE International Conference on Image Processing (ICIP 2023)

Abstract
The limitations of a machine learning model can often be traced back to under-represented regions in the feature space of the training data. Data augmentation is a common technique for inflating training datasets with new samples to improve model performance. However, such techniques usually focus on expanding the data in size and do not necessarily aim to cover the under-represented regions of the feature space. In this paper, we propose an Attention-guided Data Augmentation technique for Vision Transformers (ADA-ViT). Our framework exploits the attention mechanism in vision transformers to extract visual concepts related to misclassified samples. The retrieved concepts describe under-represented regions of the training dataset that contributed to the misclassifications. We leverage this information to guide the data augmentation process: we identify new samples matching these concepts and use them to augment the training data. We hypothesize that this focused data augmentation populates the under-represented regions and improves the model's accuracy. We evaluate our framework on the CUB dataset and CUB-Families. Our experiments show that ADA-ViT outperforms state-of-the-art data augmentation strategies and improves the accuracy of a vision transformer by an average margin of 2.5% on the CUB dataset and 3.3% on CUB-Families.
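The abstract does not specify how attention is aggregated to surface the visual concepts behind a misclassification. A common heuristic for this kind of analysis is attention rollout (Abnar & Zuidema, 2020), which combines per-layer attention maps and reads off the CLS token's attention over image patches; the sketch below assumes this heuristic and head-averaged attention matrices, and may differ from the paper's actual concept-extraction step.

```python
import numpy as np

def attention_rollout(attentions):
    """Fold per-layer attention maps into one token-to-token relevance map.

    attentions: list of (tokens, tokens) row-stochastic arrays, one per
    transformer layer, with heads already averaged. Returns a
    (tokens, tokens) rollout matrix (also row-stochastic).
    """
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for a in attentions:
        a = 0.5 * a + 0.5 * np.eye(n)          # model the residual connection
        a = a / a.sum(axis=-1, keepdims=True)  # re-normalize rows
        rollout = a @ rollout                  # compose with earlier layers
    return rollout

def top_patches(attentions, k=3):
    """Rank image patches by how much the CLS token attends to them.

    For a misclassified image, the top-ranked patches indicate the regions
    (candidate visual concepts) that drove the prediction.
    """
    rollout = attention_rollout(attentions)
    cls_to_patches = rollout[0, 1:]  # token 0 is CLS; the rest are patches
    return np.argsort(cls_to_patches)[::-1][:k]
```

In a pipeline like the one the abstract describes, the highlighted patches for each misclassified sample would then be matched against an external image pool to retrieve augmentation candidates for the under-represented region.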
Keywords
Vision Transformer, Data Augmentation