Context-Aware End-to-End ASR Using Self-Attentive Embedding and Tensor Fusion

ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023

Abstract
Typical automatic speech recognition (ASR) systems are built to recognize independent utterances without using cross-utterance context. However, the context spanning multiple utterances often provides useful information for the ASR task. In this work, we propose a context-aware end-to-end ASR model that injects a self-attentive context embedding into the decoder of the recurrent neural network transducer (RNN-T). We also propose a factorised 3-way tensor fusion approach to fuse the context embedding with the acoustic representations extracted by the acoustic encoder and the text representations obtained from the prediction network based on the previous subword units. Experimental results on a long-form YouTube ASR task show that the proposed approach achieves a 10.8% relative word error rate reduction.
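The abstract does not give the fusion equations, but a factorised (low-rank) 3-way tensor fusion is commonly realised by projecting each of the three inputs into a shared rank-r space, taking an elementwise product, and projecting the result out. The sketch below illustrates that idea in NumPy; all dimensions, weight matrices, and the function name are hypothetical placeholders, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for the three inputs and the fused output.
d_ctx, d_ac, d_txt = 128, 256, 320   # context / acoustic / text embedding sizes
rank, d_out = 64, 512                # low-rank bottleneck and output size

# Hypothetical factor matrices: one projection per modality into rank-r space.
W_ctx = 0.02 * rng.standard_normal((d_ctx, rank))
W_ac  = 0.02 * rng.standard_normal((d_ac, rank))
W_txt = 0.02 * rng.standard_normal((d_txt, rank))
W_out = 0.02 * rng.standard_normal((rank, d_out))

def factorised_tensor_fusion(c, a, t):
    """Low-rank 3-way fusion sketch: the elementwise product of the three
    rank-r projections approximates a full 3-way tensor interaction."""
    z = (c @ W_ctx) * (a @ W_ac) * (t @ W_txt)  # rank-r multiplicative interaction
    return z @ W_out                            # project to the fused dimension

# Example: fuse one context embedding, acoustic frame, and text state.
c = rng.standard_normal(d_ctx)
a = rng.standard_normal(d_ac)
t = rng.standard_normal(d_txt)
fused = factorised_tensor_fusion(c, a, t)
print(fused.shape)
```

The low-rank factorisation keeps the parameter count at roughly r(d_ctx + d_ac + d_txt + d_out) instead of the d_ctx x d_ac x d_txt x d_out cost of a full 3-way tensor, which is the usual motivation for a factorised fusion layer.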
Key words
long-form ASR, end-to-end ASR