Exploiting Audio-Visual Features with Pretrained AV-HuBERT for Multi-Modal Dysarthric Speech Reconstruction
CoRR (2024)
Abstract
Dysarthric speech reconstruction (DSR) aims to transform dysarthric speech
into normal speech by improving its intelligibility and naturalness. This is
especially challenging for patients with severe dysarthria speaking in
complex, noisy acoustic environments. To address these challenges, we propose a
novel multi-modal framework that exploits visual information, e.g., lip movements,
as extra clues for reconstructing highly abnormal pronunciations in DSR.
The multi-modal framework consists of: (i) a multi-modal encoder to extract
robust phoneme embeddings from dysarthric speech with auxiliary visual
features; (ii) a variance adaptor to infer the normal phoneme duration and
pitch contour from the extracted phoneme embeddings; (iii) a speaker encoder to
encode the speaker's voice characteristics; and (iv) a mel-decoder to generate
the reconstructed mel-spectrogram based on the extracted phoneme embeddings,
prosodic features and speaker embeddings. Both objective and subjective
evaluations conducted on the commonly used UASpeech corpus show that our
proposed approach achieves significant improvements over baseline systems in
terms of speech intelligibility and naturalness, especially for speakers
with more severe symptoms. Compared with the original dysarthric speech, the
reconstructed speech achieves a 42.1% absolute word error rate reduction for
patients with more severe dysarthria levels.
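The four-component pipeline described above can be sketched as a minimal forward pass. Everything below is an illustrative placeholder, a sketch under assumed feature dimensions and simple linear maps, not the paper's actual AV-HuBERT-based implementation; all function names, shapes, and projections are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def multimodal_encoder(audio, video):
    # (i) Fuse per-frame audio and lip-movement features, then project
    # into a phoneme-embedding space (placeholder linear map).
    fused = np.concatenate([audio, video], axis=-1)        # (T, Da + Dv)
    W = rng.standard_normal((fused.shape[-1], 256)) * 0.01
    return fused @ W                                        # (T, 256)

def variance_adaptor(phoneme_emb):
    # (ii) Infer normal phoneme duration and pitch contour from the
    # phoneme embeddings (placeholder linear regressors).
    Wd = rng.standard_normal((256, 1)) * 0.01
    Wp = rng.standard_normal((256, 1)) * 0.01
    duration = np.exp(phoneme_emb @ Wd)  # exp keeps durations positive
    pitch = phoneme_emb @ Wp
    return duration, pitch

def speaker_encoder(reference_audio):
    # (iii) Collapse a reference utterance into one fixed-size
    # speaker embedding (placeholder: mean pooling over frames).
    return reference_audio.mean(axis=0)                     # (Da,)

def mel_decoder(phoneme_emb, pitch, spk_emb):
    # (iv) Combine phoneme embeddings, prosody, and speaker identity
    # into an 80-bin mel-spectrogram (placeholder linear map).
    T = phoneme_emb.shape[0]
    spk = np.broadcast_to(spk_emb, (T, spk_emb.shape[0]))
    x = np.concatenate([phoneme_emb, pitch, spk], axis=-1)
    W = rng.standard_normal((x.shape[-1], 80)) * 0.01
    return x @ W                                            # (T, 80)

# Assumed dimensions: 100 frames, 512-dim audio, 256-dim visual features.
T, Da, Dv = 100, 512, 256
audio = rng.standard_normal((T, Da))
video = rng.standard_normal((T, Dv))

ph = multimodal_encoder(audio, video)
dur, f0 = variance_adaptor(ph)
spk = speaker_encoder(audio)
mel = mel_decoder(ph, f0, spk)
print(mel.shape)  # (100, 80)
```

In the actual system, each placeholder map would be a trained network (e.g., a pretrained AV-HuBERT front end for the encoder), and the mel-spectrogram would be passed to a vocoder to synthesize the reconstructed waveform.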
Keywords
dysarthric speech reconstruction, multi-modal, audio-visual, AV-HuBERT