IMPROVING ENTITY RECALL IN AUTOMATIC SPEECH RECOGNITION WITH NEURAL EMBEDDINGS

2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)

Abstract
Automatic speech recognition (ASR) systems often have difficulty recognizing long-tail entities such as contact names and local restaurant names, which occur rarely or not at all in the system's training data. In this work, we present a method that uses learned text embeddings and nearest neighbor retrieval within a large database of entity embeddings to correct misrecognitions. Our text embeddings are produced by a neural network trained so that the embeddings of acoustically confusable phrases have low cosine distances. Given the embedding of the text of a potential entity misrecognition and a precomputed database containing entities and their corresponding embeddings, we use fast, scalable nearest neighbor retrieval algorithms to find candidate corrections within the database. The inserted candidates are then scored using a function of the original text's cost in the lattice and the distance between the embedding of the original text and the embedding of the candidate correction. Using this lattice augmentation technique, we demonstrate a 46% reduction in word error rate (WER) and a 46% reduction in oracle word error rate (OWER) on an evaluation set with popular film queries.
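The retrieval-and-scoring step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the toy embedding vectors, the brute-force nearest-neighbor search (standing in for the scalable retrieval algorithm the paper uses), and the linear scoring weights `alpha` and `beta` are all assumptions made for clarity.

```python
import math

# Hypothetical precomputed entity database: phrase -> embedding vector.
# In the paper these embeddings come from a neural network trained so that
# acoustically confusable phrases have low cosine distance; the toy vectors
# below are purely illustrative.
ENTITY_DB = {
    "jurassic park": [0.9, 0.1, 0.2],
    "pulp fiction": [0.1, 0.8, 0.3],
}

def cosine_distance(u, v):
    """Cosine distance = 1 - cosine similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def retrieve_candidates(query_emb, k=1):
    """Brute-force nearest-neighbor search over the entity database,
    standing in for the fast, scalable retrieval the paper describes."""
    ranked = sorted(ENTITY_DB.items(),
                    key=lambda kv: cosine_distance(query_emb, kv[1]))
    return ranked[:k]

def candidate_score(lattice_cost, embedding_dist, alpha=1.0, beta=1.0):
    """Illustrative linear combination; the abstract only states that the
    score is a function of the original text's lattice cost and the
    embedding distance to the candidate correction."""
    return alpha * lattice_cost + beta * embedding_dist
```

For example, embedding a misrecognized phrase and calling `retrieve_candidates` returns the acoustically closest entities, each of which can then be inserted into the lattice with a cost from `candidate_score`.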
Key words
Embeddings, End-to-End ASR, Entity Injection, Contextual ASR