AttentionStitch: How Attention Solves the Speech Editing Problem
arXiv (2024)
Abstract
The generation of natural and high-quality speech from text is a challenging
problem in the field of natural language processing. Beyond speech generation,
speech editing is also a crucial task: edited speech must be integrated into
the synthesized speech seamlessly and unnoticeably. We
propose a novel approach to speech editing by leveraging a pre-trained
text-to-speech (TTS) model, such as FastSpeech 2, and incorporating a double
attention block network on top of it to automatically merge the synthesized
mel-spectrogram with the mel-spectrogram of the edited text. We refer to this
model as AttentionStitch, as it harnesses attention to stitch audio samples
together. We evaluate the proposed AttentionStitch model against
state-of-the-art baselines on both single and multi-speaker datasets, namely
LJSpeech and VCTK. We demonstrate its superior performance through an objective
evaluation and a subjective listening test involving 15 human participants.
AttentionStitch is capable of producing high-quality speech, even for words not
seen during training, while operating automatically without the need for human
intervention. Moreover, AttentionStitch is fast during both training and
inference and is able to generate human-sounding edited speech.
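The core idea of the abstract is merging two mel-spectrograms (the synthesized utterance and the edited segment) with cross-attention. The sketch below is a hypothetical, untrained NumPy toy illustrating that mechanism only: the function names, the residual connection, and the specific two-pass structure are illustrative assumptions, not the authors' actual double attention block, which is a trained network on top of FastSpeech 2.

```python
import numpy as np


def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def cross_attention(query, context):
    # Scaled dot-product attention: each query frame attends
    # over all context frames. Shapes: (Tq, d) x (Tc, d) -> (Tq, d).
    d = query.shape[-1]
    scores = query @ context.T / np.sqrt(d)     # (Tq, Tc)
    weights = softmax(scores, axis=-1)
    return weights @ context                    # (Tq, d)


def attention_stitch(mel_synth, mel_edit):
    # Hypothetical two-pass ("double") attention merge:
    # 1) synthesized frames attend to the edited segment,
    # 2) the result attends back over the synthesized mel,
    # with a residual connection preserving the original content.
    merged = cross_attention(mel_synth, mel_edit)
    refined = cross_attention(merged, mel_synth)
    return mel_synth + refined
```

A usage sketch: with an 80-bin mel-spectrogram, `attention_stitch(np.random.rand(100, 80), np.random.rand(30, 80))` returns a `(100, 80)` array, i.e. the stitched output keeps the time length of the synthesized utterance while incorporating information from the edited segment.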