
Few-Shot Learning With Enhancements to Data Augmentation and Feature Extraction.

IEEE Transactions on Neural Networks and Learning Systems (2024)

Abstract
The few-shot image classification task is to enable a model to identify novel classes by using only a few labeled samples as references. In general, the more knowledge a model has, the more robust it is when facing novel situations. Although directly introducing large amounts of new training data to acquire more knowledge is an attractive solution, it violates the purpose of few-shot learning with respect to reducing dependence on big data. Another viable option is to enable the model to accumulate knowledge more effectively from existing data, i.e., improve the utilization of existing data. In this article, we propose a new data augmentation method called self-mixup (SM) to assemble different augmented instances of the same image, which facilitates the model to more effectively accumulate knowledge from limited training data. In addition to the utilization of data, few-shot learning faces another challenge related to feature extraction. Specifically, existing metric-based few-shot classification methods rely on comparing the extracted features of the novel classes, but the widely adopted downsampling structures in various networks can lead to feature degradation due to the violation of the sampling theorem, and the degraded features are not conducive to robust classification. To alleviate this problem, we propose a calibration-adaptive downsampling (CADS) that calibrates and utilizes the characteristics of different features, which can facilitate robust feature extraction and benefit classification. By improving data utilization and feature extraction, our method shows superior performance on four widely adopted few-shot classification datasets.
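The abstract describes self-mixup (SM) only at a high level: different augmented instances of the same image are assembled so the model extracts more knowledge from limited data. The sketch below is a minimal illustration of that idea, not the authors' implementation; the `augment` pipeline, the Beta-distributed mixing weight, and the function names are all assumptions borrowed from standard mixup-style augmentation.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, rng):
    # Placeholder augmentation: random horizontal flip plus light noise.
    # The paper's actual augmentation pipeline is not given in the abstract.
    out = image[:, ::-1] if rng.random() < 0.5 else image
    return out + rng.normal(0.0, 0.01, size=out.shape)

def self_mixup(image, rng, alpha=0.5):
    # Hypothetical self-mixup: convexly combine two independently augmented
    # views of the SAME image. Because both views share one source image,
    # the class label is unchanged -- no label mixing is needed.
    lam = rng.beta(alpha, alpha)
    view_a = augment(image, rng)
    view_b = augment(image, rng)
    return lam * view_a + (1.0 - lam) * view_b

image = rng.random((32, 32, 3))  # toy HxWxC image
mixed = self_mixup(image, rng)
print(mixed.shape)  # same shape as the input image
```

Unlike standard mixup, which blends two different images and interpolates their labels, this same-image variant keeps the label fixed, which fits the few-shot setting where labeled samples are scarce.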
Keywords
Data augmentation, feature extraction, few-shot learning, image classification