ATAL: Active Learning Using Adversarial Training for Data Augmentation

Xuanwei Lin, Ximeng Liu, Bijia Chen, Yuyang Wang, Chen Dong, Pengzhen Hu

IEEE Internet of Things Journal (2024)

Abstract
Active learning (AL) aims to maximize model performance when the labeled data set is limited and annotation is costly. Although AL can be implemented efficiently in deep neural networks (DNNs), it is questionable whether the model can maintain good generalization when there are significant distributional deviations between the labeled and unlabeled data sets. In this article, we introduce adversarial training and adversarial samples into AL to mitigate the degradation of generalization performance caused by differing data distributions. Our proposed adversarial-training AL (ATAL) has two advantages. First, adversarial training across different networks gives the model better prediction performance and robustness with limited labeled samples. Second, the adversarial samples generated during adversarial training effectively expand the labeled data set, so that the designed query function can efficiently select the most informative unlabeled samples based on the expanded labeled set. Extensive experiments verify the feasibility and efficiency of the proposed method: on CIFAR-10, ATAL achieves new state-of-the-art robustness and accuracy.
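The abstract describes two steps: generating adversarial samples to expand the labeled set, and a query function that picks the most informative unlabeled samples. As a minimal illustrative sketch (not the authors' code; the paper uses DNNs, while this toy uses a binary logistic model, and all function names are assumptions), an FGSM-style perturbation and an entropy-based uncertainty query can be written as:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps=0.1):
    """FGSM-style adversarial example for binary logistic regression:
    step x in the sign of the cross-entropy loss gradient w.r.t. x."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)

def entropy_query(probs, k):
    """Return indices of the k most uncertain (highest-entropy) samples."""
    tiny = 1e-12
    H = -(probs * np.log(probs + tiny) + (1 - probs) * np.log(1 - probs + tiny))
    return np.argsort(-H)[:k]

# Toy model and a correctly classified point of class 1.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.2]), 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)

# The perturbation lowers the logit for the true class.
print(x @ w + b, x_adv @ w + b)

# Query the 2 most uncertain unlabeled samples.
print(entropy_query(np.array([0.9, 0.5, 0.1, 0.6]), k=2))
```

In ATAL proper, the perturbation comes from adversarial training between networks and the query operates on the expanded labeled set; the sketch only shows the shape of the two ingredients.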
Keywords
Training, Data models, Generative adversarial networks, Labeling, Robustness, Uncertainty, Bayes methods, Active learning (AL), adversarial learning, adversarial samples, data distribution, robustness