Front-End Adapter: Adapting Front-End Input of Speech based Self-Supervised Learning for Speech Recognition

Changsheng Xie, Zhenqiang Ma, Changli Tang, Yujin Wang, Zhisheng Zheng

arXiv (Cornell University), 2023

Abstract
Recent years have witnessed a boom in self-supervised learning (SSL) across many areas, including speech processing. Speech-based SSL models achieve promising performance on a range of speech-related tasks. However, training SSL models is computationally expensive, so a common practice is to fine-tune a released SSL model on a specific downstream task. This requires using a consistent front-end input during pre-training and fine-tuning, which becomes problematic when the optimal front-end for the downstream task differs from the one used in pre-training. In this paper, we propose a simple but effective front-end adapter to address this front-end discrepancy. By minimizing the distance between the outputs of different front-ends, filterbank features (Fbank) can be made compatible with SSL models that were pre-trained on raw waveforms. Experimental results demonstrate the effectiveness of the proposed front-end adapter on several popular SSL models for the speech recognition task.
Keywords
speech recognition, learning, front-end, self-supervised