DistilALHuBERT: A Distilled Parameter Sharing Audio Representation Model

Haoyu Wang, Siyuan Wang, Yaguang Gong, Wei-Qiang Zhang

SPML (2023)

Abstract
Self-supervised pre-trained audio representation models such as wav2vec or HuBERT have brought notable improvements to many downstream audio-related tasks, but the large number of parameters in these pre-trained models poses a barrier to deploying them on memory-constrained edge devices. Recursive Transformers, represented by ALBERT, have shown that sharing parameters across transformer layers can substantially reduce the size of pre-trained models while retaining most of their performance. In this paper, we propose DistilALHuBERT, a lightweight recursive transformer audio representation model distilled from HuBERT. Evaluation results on the S3PRL benchmark show that DistilALHuBERT significantly outperforms DistilHuBERT with the same number of parameters. Our code and models are available at https://github.com/backspacetg/distilAlhubert.
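To illustrate the cross-layer parameter sharing the abstract refers to, here is a minimal PyTorch sketch (not the authors' released code; class and parameter names are hypothetical). A single transformer layer's weights are reused at every depth step, so the parameter count stays roughly constant while the effective depth grows, which is the ALBERT-style idea the paper builds on:

```python
import torch.nn as nn

class SharedLayerEncoder(nn.Module):
    """ALBERT-style recursive encoder (hypothetical sketch):
    one transformer layer is applied repeatedly, so all depth
    steps share the same parameters."""

    def __init__(self, d_model=768, n_heads=12, num_repeats=12):
        super().__init__()
        # A single layer whose weights are shared across all repeats.
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.num_repeats = num_repeats

    def forward(self, x):
        # Reapply the same layer instead of stacking distinct layers.
        for _ in range(self.num_repeats):
            x = self.shared_layer(x)
        return x

# Parameter comparison against an unshared 12-layer stack:
shared = SharedLayerEncoder()
unshared = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(768, 12, 4 * 768, batch_first=True),
    num_layers=12)
print(sum(p.numel() for p in shared.parameters()))    # ~1/12 the size
print(sum(p.numel() for p in unshared.parameters()))
```

In this sketch the shared encoder holds roughly one twelfth of the unshared stack's parameters while still performing twelve layer applications per forward pass; the paper's contribution is distilling such a shared-weight student from HuBERT rather than training it from scratch.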