
Improved Training Strategies for End-to-End Speech Recognition in Digital Voice Assistants

INTERSPEECH(2020)

Abstract
The speech recognition training data corresponding to digital voice assistants is dominated by wake-words. Training end-to-end (E2E) speech recognition models without careful attention to such data results in sub-optimal performance, as models prioritize learning wake-words. To address this problem, we propose a novel discriminative initialization strategy that introduces a regularization term to penalize the model for incorrectly hallucinating wake-words in early phases of training. We also explore other training strategies, such as multi-task learning with listen-attend-spell (LAS), label smoothing via probabilistic modelling of silence, and the use of multiple pronunciations, and show how they can be combined with the proposed initialization technique. In addition, we show the connection between the cost function of the proposed discriminative initialization technique and the minimum word error rate (MWER) criterion. We evaluate our methods on two E2E ASR systems, a phone-based system and a word-piece based system, trained on 6500 hours of Alexa's Indian English speech corpus. We show that the proposed techniques yield a 20% word error rate reduction for the phone-based system and 6% for the word-piece based system compared to corresponding baselines trained on the same data.
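The regularization idea in the abstract can be illustrated with a minimal sketch: augment the base E2E loss (e.g. CTC) with a term that penalizes posterior mass assigned to the wake-word token on utterances that contain no wake-word. Note this is an illustrative assumption of what such a penalty could look like, not the paper's actual formulation; the names `wake_word_penalty`, `regularized_loss`, and the weight `lam` are hypothetical.

```python
import numpy as np

def wake_word_penalty(log_posteriors, wake_word_id):
    # Mean posterior mass the model puts on the wake-word token
    # across all frames of a non-wake-word utterance (illustrative form).
    probs = np.exp(log_posteriors)          # (T, V) frame-level posteriors
    return float(probs[:, wake_word_id].mean())

def regularized_loss(base_loss, log_posteriors, wake_word_id, lam=0.5):
    # Base E2E training loss (e.g. CTC) plus the hallucination penalty,
    # weighted by a hypothetical hyper-parameter lam.
    return base_loss + lam * wake_word_penalty(log_posteriors, wake_word_id)

# Toy example: 4 frames, vocabulary of 3 tokens; token 0 is the wake-word.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
log_post = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = regularized_loss(2.0, log_post, wake_word_id=0)
```

Since the penalty is non-negative, the regularized loss is never below the base loss, and setting `lam=0` recovers ordinary training; a schedule that decays `lam` after the early phases would match the abstract's emphasis on initialization.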
Key words
acoustics-to-word, automatic speech recognition, connectionist temporal classification, end-to-end, initialization, voice assistant