
Simple Gated Convnet for Small Footprint Acoustic Modeling

2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019

Abstract
Acoustic modeling with recurrent neural networks has shown very good performance, especially for end-to-end speech recognition. However, most recurrent neural networks require sequential computation of the output, which results in large memory access overhead when implemented on embedded devices. Convolution-based sequence modeling does not suffer from this problem; however, such models usually require a large number of parameters. We propose simple gated convolutional neural networks (Simple Gated ConvNet) for acoustic modeling and show that the network performs very well even when the number of parameters is fairly small, less than 3 million. The Simple Gated ConvNet (SGCN) is constructed by combining the simplest form of Gated ConvNet and one-dimensional (1-D) depthwise convolution. The model has been evaluated using the Wall Street Journal (WSJ) Corpus and has shown performance competitive with RNN-based models. The performance of the SGCN has also been evaluated using the LibriSpeech Corpus. The developed model was implemented on ARM CPU-based systems and achieved a real-time factor (RTF) of around 0.05.
Key words
Speech recognition, Gated ConvNet, Sequence modeling, Embedded system
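
The abstract describes the SGCN building block only at a high level: the simplest form of gating combined with a 1-D depthwise convolution. The sketch below is a minimal PyTorch-style illustration of such a block, not the authors' implementation; the class name, channel count, and kernel size are assumptions chosen for the example.

```python
import torch
import torch.nn as nn


class SimpleGatedConvBlock(nn.Module):
    """Illustrative gated 1-D depthwise convolution block.

    Combines a depthwise temporal convolution (groups == channels, so each
    channel has its own filter) with a simple elementwise sigmoid gate.
    Layer sizes are placeholder values, not taken from the paper.
    """

    def __init__(self, channels: int, kernel_size: int = 5):
        super().__init__()
        padding = kernel_size // 2
        # Content path: depthwise 1-D convolution over the time axis.
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=padding, groups=channels)
        # Gate path: a second depthwise convolution followed by a sigmoid.
        self.gate = nn.Conv1d(channels, channels, kernel_size,
                              padding=padding, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, time).
        return self.conv(x) * torch.sigmoid(self.gate(x))


if __name__ == "__main__":
    # Toy input: batch of 2 utterances, 40 feature channels, 100 frames.
    x = torch.randn(2, 40, 100)
    block = SimpleGatedConvBlock(channels=40)
    print(block(x).shape)  # torch.Size([2, 40, 100])
```

Because the convolutions are depthwise, the parameter count per block grows with channels * kernel_size rather than channels^2 * kernel_size, which is consistent with the paper's emphasis on a small footprint (under 3 million parameters in total).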