Feature Alignment: Rethinking Efficient Active Learning via Proxy in the Context of Pre-trained Models
CoRR (2024)
Abstract
Fine-tuning the pre-trained model with active learning holds promise for
reducing annotation costs. However, this combination introduces significant
computational costs, particularly with the growing scale of pre-trained models.
Recent research has proposed proxy-based active learning, which pre-computes
features to reduce computational costs. Yet, this approach often incurs a
significant loss in active learning performance, which may even outweigh the
computational cost savings. In this paper, we argue that the performance drop stems
not only from the pre-computed features' inability to distinguish between
categories of labeled samples, which leads to the selection of redundant samples,
but also from the tendency to compromise valuable pre-trained information when
fine-tuning on samples selected through the proxy model. To address this
issue, we propose a novel method called aligned selection via proxy to update
pre-computed features while selecting a proper training method to inherit
valuable pre-training information. Extensive experiments validate that our
method significantly reduces the total cost of efficient active learning while
maintaining computational efficiency.
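To make the proxy-based setting concrete, the sketch below illustrates the general idea of selecting samples via a lightweight proxy over pre-computed features. This is not the paper's method: the nearest-centroid proxy, the margin-based uncertainty score, and all names here are illustrative assumptions, and the random features stand in for embeddings from a frozen pre-trained encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-computed features from a frozen pre-trained encoder;
# random stand-ins here (in practice: one forward pass over the pool).
features = rng.normal(size=(1000, 32))
labeled_idx = rng.choice(1000, size=20, replace=False)
# Placeholder labels for the labeled subset (would come from annotators).
labels = rng.integers(0, 4, size=len(labeled_idx))

def proxy_uncertainty(feats, labeled, lab, n_classes=4):
    """Train a trivial nearest-centroid proxy on pre-computed features
    and return an uncertainty score (negative margin) per pool sample."""
    centroids = np.stack([
        feats[labeled[lab == c]].mean(axis=0)
        if np.any(lab == c) else np.zeros(feats.shape[1])
        for c in range(n_classes)
    ])
    # Distance of every pool sample to every class centroid.
    dists = np.linalg.norm(feats[:, None, :] - centroids[None], axis=2)
    sorted_d = np.sort(dists, axis=1)
    # Small gap between the two closest centroids = high uncertainty.
    return -(sorted_d[:, 1] - sorted_d[:, 0])

scores = proxy_uncertainty(features, labeled_idx, labels)
query = np.argsort(scores)[-10:]  # 10 most uncertain samples to annotate
```

Because the features are computed once and the proxy is cheap, each selection round avoids re-running the large backbone; the paper's point is that keeping these features fixed degrades selection quality, motivating their feature-updating approach.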