
Learning to Jointly Transcribe and Subtitle for End-To-End Spontaneous Speech Recognition

2022 IEEE Spoken Language Technology Workshop (SLT), 2023

Abstract
TV subtitles are a rich source of transcriptions of many types of speech, ranging from read speech in news reports to conversational and spontaneous speech in talk shows and soaps. However, subtitles are not verbatim (i.e. exact) transcriptions of speech, so they cannot be used directly to improve an Automatic Speech Recognition (ASR) model. We propose a multitask dual-decoder Transformer model that jointly performs ASR and automatic subtitling. The ASR decoder (possibly pre-trained) predicts the verbatim output and the subtitle decoder generates a subtitle, while sharing the encoder. The two decoders can be independent or connected. The model is trained to perform both tasks jointly, and is able to effectively use subtitle data. We show improvements on regular ASR and on spontaneous and conversational ASR by incorporating the additional subtitle decoder. The method does not require preprocessing (aligning, filtering, pseudo-labeling,…) of the subtitles.
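The joint training described in the abstract (a shared encoder with an ASR decoder and a subtitle decoder trained together) implies a multitask objective combining the two decoders' losses. A minimal sketch of such an objective is below; the interpolation weight `lam`, the function names, and the toy per-token probabilities are illustrative assumptions, not details taken from the paper.

```python
import math

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target token under a predicted distribution."""
    return -math.log(probs[target_idx])

def joint_loss(asr_probs, asr_targets, sub_probs, sub_targets, lam=0.5):
    """Multitask loss: weighted sum of the verbatim-ASR decoder loss and the
    subtitle decoder loss. `lam` is an assumed interpolation weight; the
    paper's actual weighting scheme may differ."""
    l_asr = sum(cross_entropy(p, t) for p, t in zip(asr_probs, asr_targets)) / len(asr_targets)
    l_sub = sum(cross_entropy(p, t) for p, t in zip(sub_probs, sub_targets)) / len(sub_targets)
    return lam * l_asr + (1.0 - lam) * l_sub

# Toy example: two-token sequences over a three-word vocabulary.
asr_probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]  # ASR decoder's per-token distributions
sub_probs = [[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]]  # subtitle decoder's per-token distributions
loss = joint_loss(asr_probs, [0, 1], sub_probs, [0, 2], lam=0.5)
```

Because both losses backpropagate through the shared encoder, subtitle data can improve the encoder representations used by the verbatim ASR decoder even though subtitles are not exact transcriptions.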
Keywords
speech recognition, multitask learning, end-to-end, subtitles, spontaneous speech