Speaker location and microphone spacing invariant acoustic modeling from raw multichannel waveforms

2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)

Abstract
Multichannel ASR systems commonly use separate modules to perform speech enhancement and acoustic modeling. In this paper, we present an algorithm that performs multichannel enhancement jointly with the acoustic model, using a raw waveform convolutional LSTM deep neural network (CLDNN). We show that our proposed method offers ~5% relative improvement in WER over a log-mel CLDNN trained on multiple channels. Analysis shows that the proposed network learns to be robust to varying angles of arrival for the target speaker, and performs as well as a model that is given oracle knowledge of the true location. Finally, we show that training such a network on inputs captured using multiple (linear) array configurations results in a model that is robust to a range of microphone spacings.
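The joint front-end described above replaces explicit beamforming with filters learned directly on the multichannel waveform. A minimal NumPy sketch of the underlying filter-and-sum operation is given below; the function name, filter shapes, and framing parameters are illustrative assumptions, not the paper's exact layer:

```python
import numpy as np

def multichannel_conv_features(waveforms, filters, window=400, hop=160):
    """Filter-and-sum style raw-waveform front-end (illustrative sketch).

    waveforms: (C, T) multichannel audio, C channels of T samples
    filters:   (P, C, K) P spatial filters, each a K-tap FIR per channel
               (in the paper these are learned jointly with the acoustic model)
    Returns:   (frames, P) log-compressed energy features
    """
    C, T = waveforms.shape
    P, Cf, K = filters.shape
    assert Cf == C, "one FIR filter per channel is required for each output"

    # Filter each channel and sum across channels: a learned, implicit
    # beamformer steered by the filters' relative delays and gains.
    outputs = np.zeros((P, T - K + 1))
    for p in range(P):
        acc = np.zeros(T - K + 1)
        for c in range(C):
            acc += np.convolve(waveforms[c], filters[p, c], mode="valid")
        outputs[p] = acc

    # Frame, rectify, average-pool, and log-compress, yielding a
    # log-mel-like feature map for the downstream CLDNN.
    n_frames = 1 + (outputs.shape[1] - window) // hop
    feats = np.zeros((n_frames, P))
    for f in range(n_frames):
        seg = outputs[:, f * hop : f * hop + window]
        feats[f] = np.log(np.mean(np.abs(seg), axis=1) + 1e-6)
    return feats


# Example: 2-channel audio, 8 learned filters of 25 taps each.
rng = np.random.default_rng(0)
wav = rng.standard_normal((2, 16000))   # ~1 s at 16 kHz
filt = rng.standard_normal((8, 2, 25))
feats = multichannel_conv_features(wav, filt)
```

Because the filters see all channels jointly, gradients from the acoustic model can shape them to favor particular arrival angles, which is the mechanism behind the location-robustness analyzed in the paper.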
Keywords
array configuration,log-mel CLDNN,word error rate,WER,convolutional LSTM deep neural network,multichannel enhancement,speech enhancement,automatic speech recognition systems,multichannel ASR systems,microphone spacing invariant acoustic modeling,speaker location