
ViLaS: Exploring the Effects of Vision and Language Context in Automatic Speech Recognition.

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024

Abstract
Enhancing automatic speech recognition (ASR) performance by leveraging additional multimodal information has shown promising results in previous studies. However, most of these works have primarily focused on visual cues derived from human lip motions. In fact, context-dependent visual and linguistic cues can also benefit ASR in many scenarios. In this paper, we first propose ViLaS (Vision and Language into Automatic Speech Recognition), a novel multimodal ASR model based on the continuous integrate-and-fire (CIF) mechanism, which can integrate visual and textual context simultaneously or separately to facilitate speech recognition. Next, we introduce an effective training strategy that improves performance in modal-incomplete test scenarios. Then, to explore the effects of integrating vision and language, we create VSDial, a multimodal ASR dataset with multimodal context cues in both Chinese and English versions. Finally, empirical results are reported on the public Flickr8K and self-constructed VSDial datasets. We explore various cross-modal fusion schemes, analyze fine-grained cross-modal alignment on VSDial, and provide insights into the effects of integrating multimodal information on speech recognition.
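The model builds on the continuous integrate-and-fire (CIF) mechanism, which accumulates a learned per-frame weight over the acoustic encoder outputs and "fires" a token-level embedding each time the accumulated weight crosses a threshold. The sketch below is a minimal NumPy illustration of that generic firing rule only; the function name cif_integrate, the threshold of 1.0, and the simplified tail handling are assumptions for illustration, and the paper's visual/textual fusion on top of the fired embeddings is not shown.

```python
import numpy as np

def cif_integrate(frames, alphas, threshold=1.0):
    """Illustrative continuous integrate-and-fire (CIF) aggregation.

    frames: (T, D) array of per-frame acoustic encoder outputs.
    alphas: (T,) array of non-negative per-frame weights from a learned scorer
            (assumed < threshold per frame for simplicity).
    Accumulates weights frame by frame; each time the running sum reaches
    `threshold`, the weighted frames collected so far are emitted as one
    token-level embedding, and the leftover weight starts the next token.
    """
    integrated = []
    acc_weight = 0.0
    acc_state = np.zeros(frames.shape[1])
    for h_t, a_t in zip(frames, alphas):
        if acc_weight + a_t < threshold:
            # Keep integrating: no boundary yet.
            acc_weight += a_t
            acc_state += a_t * h_t
        else:
            # Fire: split this frame's weight between the closing token
            # and the next one.
            used = threshold - acc_weight
            integrated.append(acc_state + used * h_t)
            acc_weight = a_t - used
            acc_state = acc_weight * h_t
    # Residual weight at the end (tail handling) is omitted for brevity.
    return np.stack(integrated) if integrated else np.empty((0, frames.shape[1]))

# Example: 8 frames of 4-dim features with uniform weights fires ~4 tokens.
feats = np.random.randn(8, 4)
weights = np.full(8, 0.5)
print(cif_integrate(feats, weights).shape)  # (3, 4): 4th token left in the tail
```

In a multimodal setup such as the one the abstract describes, the fired token-level embeddings would then be combined with visual and textual context features (e.g., via attention-based fusion) before decoding, so the choice of fusion scheme operates on token-synchronous representations rather than raw frames.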
Key words
Multimodal speech recognition, multimodal machine learning, continuous integrate-and-fire