
Effect and Analysis of Large-scale Language Model Rescoring on Competitive ASR Systems

Conference of the International Speech Communication Association (INTERSPEECH), 2022

Abstract
Large-scale language models (LLMs) such as GPT-2, BERT and RoBERTa have been successfully applied to ASR N-best rescoring. However, whether or how they can benefit competitive, near state-of-the-art ASR systems remains unexplored. In this study, we incorporate LLM rescoring into one of the most competitive ASR baselines: the Conformer-Transducer model. We demonstrate that consistent improvement is achieved by the LLM's bidirectionality, pretraining, in-domain finetuning and context augmentation. Furthermore, our lexical analysis sheds light on how each of these components may be contributing to the ASR performance.
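The abstract does not spell out the rescoring formulation, so below is a minimal sketch assuming the common setup for bidirectional-LLM rescoring: score each N-best hypothesis with BERT's pseudo-log-likelihood (masking one token at a time and summing the log-probability of the original token), then linearly interpolate with the first-pass ASR score. The model name `bert-base-uncased`, the interpolation weight `lam`, and the toy N-best list are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of N-best rescoring with a bidirectional LLM (BERT), assuming
# the common linear interpolation  score(h) = asr_score(h) + lam * lm_score(h).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(text: str) -> float:
    """Mask each token in turn and sum the log-probability that BERT
    assigns to the original token (a pseudo-log-likelihood score)."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

def rescore(nbest: list[tuple[str, float]], lam: float = 0.5) -> str:
    """Pick the hypothesis maximizing asr_score + lam * LM score."""
    return max(nbest, key=lambda h: h[1] + lam * pseudo_log_likelihood(h[0]))[0]

# Toy example: two first-pass hypotheses with their ASR log-scores.
nbest = [("i saw the whether report", -12.3),
         ("i saw the weather report", -12.9)]
print(rescore(nbest))
```

In this setup the bidirectional LM can use both left and right context of each masked position, which is one way to read the abstract's claim that bidirectionality contributes to the rescoring gains.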
Key words
speech recognition, large-scale language models, N-best rescoring