Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets
arXiv (2022)
Abstract
While parameter efficient tuning (PET) methods have shown great potential
with transformer architecture on Natural Language Processing (NLP) tasks, their
effectiveness with large-scale ConvNets is still under-studied on Computer
Vision (CV) tasks. This paper proposes Conv-Adapter, a PET module designed for
ConvNets. Conv-Adapter is light-weight, domain-transferable, and
architecture-agnostic with generalized performance on different tasks. When
transferring to downstream tasks, Conv-Adapter learns task-specific feature
modulation to the intermediate representations of backbones while keeping the
pre-trained parameters frozen. By introducing only a tiny amount of learnable
parameters, e.g., only 3.5% of the full fine-tuning parameters, Conv-Adapter can
also be applied to transformer-based backbones. Conv-Adapter outperforms
previous PET baseline methods and achieves comparable or surpasses the
performance of full fine-tuning on 23 classification tasks of various domains.
It also presents superior performance on the few-shot classification with an
average margin of 3.39%. Beyond classification, Conv-Adapter generalizes to
detection and segmentation tasks with more than 50% reduction of parameters but
comparable performance to the traditional full fine-tuning.
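As a rough illustration of the mechanism described above, the sketch below attaches a small bottleneck adapter to each frozen stage of a torchvision ResNet-50 and trains only the adapters and the task head. The bottleneck structure (1x1 down-projection, depthwise convolution, GELU, 1x1 up-projection), the reduction ratio, the post-stage insertion point, the zero initialization, and the 100-class head are assumptions made for this example, not the paper's exact design.

```python
# Minimal, self-contained sketch of adapter-style parameter-efficient tuning
# on a frozen ConvNet backbone. Illustrative only; hyper-parameters and
# insertion points are assumptions, not the Conv-Adapter paper's exact design.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class ConvAdapter(nn.Module):
    """Light-weight bottleneck adapter producing a task-specific modulation."""

    def __init__(self, channels: int, reduction: int = 8, kernel_size: int = 3):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.down = nn.Conv2d(channels, hidden, kernel_size=1, bias=False)
        self.depthwise = nn.Conv2d(hidden, hidden, kernel_size,
                                   padding=kernel_size // 2, groups=hidden, bias=False)
        self.act = nn.GELU()
        self.up = nn.Conv2d(hidden, channels, kernel_size=1, bias=False)
        # Zero-init: the adapted model starts out identical to the frozen backbone.
        nn.init.zeros_(self.up.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.depthwise(self.down(x))))


class AdaptedStage(nn.Module):
    """Runs a frozen backbone stage, then adds the adapter output as a residual modulation."""

    def __init__(self, stage: nn.Module, out_channels: int):
        super().__init__()
        self.stage = stage
        self.adapter = ConvAdapter(out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.stage(x)
        return y + self.adapter(y)


# Freeze the pre-trained backbone; only adapters and the new head will be trained.
backbone = resnet50(weights="IMAGENET1K_V2")   # requires torchvision >= 0.13
for p in backbone.parameters():
    p.requires_grad = False

# Attach one adapter per residual stage (output channel widths of ResNet-50 stages).
for name, channels in [("layer1", 256), ("layer2", 512), ("layer3", 1024), ("layer4", 2048)]:
    setattr(backbone, name, AdaptedStage(getattr(backbone, name), channels))

backbone.fc = nn.Linear(2048, 100)  # hypothetical 100-class downstream task head

trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
total = sum(p.numel() for p in backbone.parameters())
print(f"trainable: {trainable / total:.1%} of all parameters")
```

Zero-initializing the up-projection makes the adapted network behave exactly like the frozen backbone at the start of training, so optimization only has to learn the task-specific modulation on top of the pre-trained features.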