Improving Mandarin End-to-End Speech Synthesis by Self-Attention and Learnable Gaussian Bias

2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)

Abstract
Compared to conventional speech synthesis, end-to-end speech synthesis achieves substantially better naturalness with a much simpler system-building pipeline. For English, an end-to-end framework can generate natural speech directly from characters. For other languages such as Chinese, however, recent studies have indicated that extra engineered features, e.g., word boundaries and prosody boundaries, are still needed for model robustness and naturalness, which makes the front-end pipeline as complicated as in the traditional approach. To maintain the naturalness of the generated speech while discarding language-specific expertise as far as possible, we introduce a novel self-attention-based encoder with a learnable Gaussian bias into Tacotron for Mandarin TTS. We evaluate systems with and without complex prosody information, and the results show that the proposed approach generates stable and natural speech with minimal language-dependent front-end modules.
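
The abstract only names the technique, so the following PyTorch sketch illustrates one common way to add a learnable Gaussian locality bias to self-attention logits; the module name, single-head setup, and per-layer sigma parameterisation are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch: self-attention whose logits are biased by a learnable
# Gaussian window centred on each query position. Not the paper's exact model.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianBiasedSelfAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.scale = 1.0 / math.sqrt(d_model)
        # One learnable log-width per layer; exp() keeps sigma positive.
        self.log_sigma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model), e.g. phone/character embeddings.
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        logits = torch.matmul(q, k.transpose(-1, -2)) * self.scale  # (B, T, T)

        # Gaussian bias -(j - i)^2 / (2 * sigma^2): penalise key positions j
        # far from the query position i before the softmax.
        t = x.size(1)
        pos = torch.arange(t, device=x.device, dtype=x.dtype)
        dist2 = (pos.unsqueeze(0) - pos.unsqueeze(1)) ** 2          # (T, T)
        sigma = self.log_sigma.exp()
        bias = -dist2 / (2.0 * sigma ** 2)

        attn = F.softmax(logits + bias, dim=-1)
        return torch.matmul(attn, v)

if __name__ == "__main__":
    layer = GaussianBiasedSelfAttention(d_model=256)
    dummy_inputs = torch.randn(2, 50, 256)
    print(layer(dummy_inputs).shape)  # torch.Size([2, 50, 256])

In such a setup the Gaussian bias encourages locality in the encoder attention, which is one plausible reason a model could stay stable without hand-engineered boundary features; how the paper actually integrates the bias into the Tacotron encoder is not specified in the abstract.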
Keywords
Tacotron, end-to-end, speech synthesis, self-attention, Gaussian bias