Bidirectional Recurrent Neural Network Language Models For Automatic Speech Recognition
2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Abstract
Recurrent neural network language models have enjoyed great success in speech recognition, partially due to their ability to model longer-distance context than word n-gram models. In recurrent neural networks (RNNs), contextual information from past inputs is modeled with the help of recurrent connections at the hidden layer, while Long Short-Term Memory (LSTM) neural networks are RNNs that contain units that can store values for arbitrary amounts of time. While conventional unidirectional networks predict outputs from only past inputs, one can build bidirectional networks that also condition on future inputs. In this paper, we propose applying bidirectional RNNs and LSTM neural networks to language modeling for speech recognition. We discuss issues that arise when utilizing bidirectional models for speech, and compare unidirectional and bidirectional models on an English Broadcast News transcription task. We find that bidirectional RNNs significantly outperform unidirectional RNNs, but bidirectional LSTMs do not provide any further gain over their unidirectional counterparts.
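To make the unidirectional/bidirectional distinction concrete, below is a minimal illustrative sketch in PyTorch, not the authors' implementation. All names and dimensions (vocab_size, embed_dim, hidden_dim) are assumptions chosen for the example; it only shows how a bidirectional LSTM's output at each position conditions on both past and future tokens, which is the property the paper exploits and whose complications for speech decoding it discusses.

```python
# Illustrative sketch (assumed setup, not the paper's code): contrast a
# unidirectional and a bidirectional LSTM run over one token sequence.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128  # hypothetical sizes
embed = nn.Embedding(vocab_size, embed_dim)

# Unidirectional LSTM: the state at position t depends only on tokens <= t.
uni = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

# Bidirectional LSTM: a second LSTM reads the sequence right-to-left, so
# the output at position t also conditions on tokens > t (output doubles).
bi = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

tokens = torch.randint(0, vocab_size, (1, 10))  # one sequence, 10 tokens
x = embed(tokens)

uni_out, _ = uni(x)  # shape (1, 10, 128): past context only
bi_out, _ = bi(x)    # shape (1, 10, 256): past and future context
```

Note that naively predicting the word at position t from bi_out[:, t] would leak the target through the backward direction, which has already read it; this is one of the issues with using bidirectional models as language models for speech that the paper addresses.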
Keywords
Language modeling, recurrent neural networks, long short-term memory, bidirectional neural networks