TalkNet: Fully-Convolutional Non-Autoregressive Speech Synthesis Model

arXiv (2020)

Abstract
We propose TalkNet, a convolutional non-autoregressive neural model for speech synthesis. The model consists of two feed-forward convolutional networks. The first network predicts grapheme durations. An input text is expanded by repeating each symbol according to the predicted duration. The second network generates a mel-spectrogram from the expanded text. To train the grapheme duration predictor, we add grapheme durations to the training dataset using a pre-trained Connectionist Temporal Classification (CTC)-based speech recognition model. The explicit duration prediction eliminates word skipping and repeating. Experiments on the LJSpeech dataset show that the speech quality nearly matches that of autoregressive models. The model is very compact: it has 10.8M parameters, almost 3x fewer than the present state-of-the-art text-to-speech models. The non-autoregressive architecture allows for fast training and inference.
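The core mechanism is the duration-based expansion: each grapheme embedding is repeated for its predicted number of mel frames before the second network runs. Below is a minimal PyTorch sketch of that pipeline; it is not the authors' implementation, and all layer sizes, kernel widths, and names (DurationPredictor, expand_by_duration, MelGenerator) are illustrative assumptions.

import torch
import torch.nn as nn

class DurationPredictor(nn.Module):
    # First network (sketch): 1-D convolutional stack predicting one duration per grapheme.
    def __init__(self, d_model=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(d_model, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):  # x: (batch, n_graphemes, d_model)
        log_d = self.conv(x.transpose(1, 2)).squeeze(1)       # (batch, n_graphemes)
        return torch.clamp(torch.round(torch.exp(log_d)), min=1).long()

def expand_by_duration(x, durations):
    # Repeat each symbol embedding durations[i] times (single utterance).
    # x: (n_graphemes, d_model), durations: (n_graphemes,) integer frame counts.
    return torch.repeat_interleave(x, durations, dim=0)

class MelGenerator(nn.Module):
    # Second network (sketch): expanded text -> mel-spectrogram frames.
    def __init__(self, d_model=256, n_mels=80):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(d_model, n_mels, kernel_size=5, padding=2),
        )

    def forward(self, x):  # x: (batch, n_frames, d_model)
        return self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, n_frames, n_mels)

# Toy usage: embed graphemes, predict durations, expand, generate mels.
emb = nn.Embedding(40, 256)                   # hypothetical grapheme vocabulary
tokens = torch.randint(0, 40, (1, 12))        # one utterance, 12 graphemes
x = emb(tokens)                               # (1, 12, 256)
durs = DurationPredictor()(x)[0]              # (12,) predicted frame counts
expanded = expand_by_duration(x[0], durs)     # (total_frames, 256)
mels = MelGenerator()(expanded.unsqueeze(0))  # (1, total_frames, 80)

At training time the durations come from the CTC-based recognizer described in the abstract rather than from the predictor itself; the sketch only illustrates the inference-time data flow.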
Keywords
speech synthesis, model, fully-convolutional, non-autoregressive