Context dependent recurrent neural network language model.

SLT (2012)

Abstract
Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state-of-the-art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word-error-rate.
Keywords
natural language processing, recurrent neural nets, speech recognition, text analysis, word processing, Penn Treebank data, Wall Street Journal speech recognition task, context dependent recurrent neural network language model, contextual real-valued input vector, data fragmentation, latent Dirichlet allocation, multiple topic models, perplexity, sentence modelling, text block, topic-conditioned RNNLM, word-error-rate improvement, Language Modeling, Latent Dirichlet Allocation, Recurrent Neural Network, Topic Models