Stabilizing Label Assignment for Speech Separation by Self-Supervised Pre-Training

Interspeech (2021)

Abstract
Speech separation is now well developed, largely thanks to the very successful permutation invariant training (PIT) approach, although the frequent label assignment switching that occurs during PIT training remains a problem when better convergence speed and achievable performance are desired. In this paper, we propose performing self-supervised pre-training to stabilize the label assignment when training the speech separation model. Experiments over several types of self-supervised approaches, several typical speech separation models, and two different datasets show that substantial improvements are achievable if a proper self-supervised approach is chosen.
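To make the label-assignment issue concrete, below is a minimal PyTorch sketch of utterance-level PIT. It is not the paper's implementation; the function name pit_loss, the MSE criterion, and the shapes are illustrative assumptions. The permutation chosen per utterance is the "label assignment"; when it flips between training steps, the model receives inconsistent targets, which is the switching behavior the paper aims to stabilize via self-supervised pre-training.

# Minimal utterance-level PIT sketch (illustrative, not the paper's code).
import itertools
import torch

def pit_loss(estimates: torch.Tensor, targets: torch.Tensor):
    """Permutation invariant training loss for one batch.

    estimates, targets: (batch, num_speakers, num_samples).
    Returns the loss under the best speaker permutation and the chosen
    permutation per utterance; changes in this assignment across
    training steps are the "label assignment switching" discussed above.
    """
    batch, num_spk, _ = estimates.shape
    perms = list(itertools.permutations(range(num_spk)))
    # Per-utterance MSE for every candidate speaker permutation.
    losses = torch.stack([
        ((estimates[:, list(perm), :] - targets) ** 2).mean(dim=(1, 2))
        for perm in perms
    ], dim=1)                              # (batch, num_permutations)
    best_loss, best_idx = losses.min(dim=1)
    return best_loss.mean(), [perms[i] for i in best_idx.tolist()]

# Example: two estimated sources vs. two reference sources, 1 s at 16 kHz.
est = torch.randn(4, 2, 16000)
ref = torch.randn(4, 2, 16000)
loss, assignment = pit_loss(est, ref)
print(loss.item(), assignment)

A stable run would show the per-utterance assignment staying fixed across epochs; logging how often it flips is one simple way to observe the effect that pre-training is meant to reduce.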
Keywords
Speech Enhancement, Self-supervised Pre-training, Speech Separation, Label Permutation Switch