Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters
CoRR (2024)
Abstract
Mixture of Experts (MoE) architectures have recently started burgeoning due
to their ability to scale a model's capacity while keeping the computational
cost affordable. Furthermore, they can be applied to both Transformers and
State Space Models, the current state-of-the-art models in numerous fields.
While MoE has been mostly investigated for the pre-training stage, its use in
parameter-efficient transfer learning settings is under-explored. To narrow
this gap, this paper attempts to demystify the use of MoE for
parameter-efficient fine-tuning of Audio Spectrogram Transformers to audio and
speech downstream tasks. Specifically, we propose Soft Mixture of Adapters
(Soft-MoA). It exploits adapters as the experts and, leveraging the recent Soft
MoE method, it relies on a soft assignment between the input tokens and experts
to keep the computational time limited. Extensive experiments across 4
benchmarks demonstrate that Soft-MoA outperforms the single adapter method and
performs on par with the dense MoA counterpart. Finally, we present ablation
studies on key elements of Soft-MoA, showing, for example, that Soft-MoA achieves
better scaling with more experts, as well as ensuring that all experts
contribute to the computation of the output tokens, thus dispensing with the
expert imbalance issue.
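
The abstract describes Soft-MoA as a set of adapter experts combined through the Soft MoE soft assignment between input tokens and experts. The sketch below illustrates that idea under stated assumptions: bottleneck adapters serve as experts, one learnable slot per expert, and generic dimensions; class names, sizes, and the single-slot choice are illustrative and not the paper's exact configuration.

```python
# Minimal sketch of a Soft Mixture of Adapters (Soft-MoA) layer, assuming
# bottleneck adapters as experts and a Soft-MoE-style soft token-to-slot
# assignment. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-projection, non-linearity, up-projection."""

    def __init__(self, dim: int, bottleneck: int):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.down(x)))


class SoftMoA(nn.Module):
    """Soft mixture of adapter experts with one slot per expert."""

    def __init__(self, dim: int, num_experts: int = 4, bottleneck: int = 64):
        super().__init__()
        self.experts = nn.ModuleList(
            Adapter(dim, bottleneck) for _ in range(num_experts)
        )
        # One learnable slot embedding per expert: (num_experts, dim).
        self.slots = nn.Parameter(torch.randn(num_experts, dim) * dim ** -0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), e.g. AST patch tokens.
        logits = torch.einsum("btd,ed->bte", x, self.slots)     # token-slot affinities
        dispatch = logits.softmax(dim=1)                        # each slot is a weighted average over tokens
        combine = logits.softmax(dim=2)                         # each token is a weighted average over slots
        slot_inputs = torch.einsum("bte,btd->bed", dispatch, x) # (batch, experts, dim)
        slot_outputs = torch.stack(
            [expert(slot_inputs[:, i]) for i, expert in enumerate(self.experts)],
            dim=1,
        )
        # Every expert contributes to every output token via the soft combine weights.
        return torch.einsum("bte,bed->btd", combine, slot_outputs)


if __name__ == "__main__":
    layer = SoftMoA(dim=768, num_experts=4)
    tokens = torch.randn(2, 100, 768)
    print(layer(tokens).shape)  # torch.Size([2, 100, 768])
```

Because the assignment is soft, every expert processes a fixed number of slots regardless of the input, which keeps the computational cost bounded and avoids the expert-imbalance issue mentioned in the abstract.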