Deep Network Perceptual Losses for Speech Denoising

arXiv (2021)

Abstract
Contemporary speech enhancement predominantly relies on audio transforms that are trained to reconstruct a clean speech waveform. Here we investigate whether deep feature representations learned for audio classification tasks can be used to improve denoising. We first trained deep neural networks to classify either spoken words or environmental sounds from audio. We then trained an audio transform to map noisy speech to an audio waveform that minimized 'perceptual' losses derived from the recognition network. When the transform was trained to minimize the difference in the deep feature representations between the output audio and the corresponding clean audio, it removed noise substantially better than baseline methods trained to reconstruct clean waveforms. The learned deep features were essential for this improvement, as features from untrained networks with random weights did not provide the same benefit. The results suggest the use of deep features as perceptual metrics to guide speech enhancement.
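The core idea above is to score a denoised waveform not by its sample-level distance to the clean waveform, but by the distance between the two signals' activations inside a trained recognition network. The paper does not provide code; the following is a minimal illustrative sketch in NumPy, in which a toy two-layer "network" with fixed filter weights stands in for the trained classifier, and the perceptual loss is the mean L1 distance between the layer activations of the denoised and clean signals. All names (`features`, `deep_feature_loss`, the filter shapes) are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def features(wave, weights):
    """Toy stand-in for a recognition network: a stack of strided 1-D
    'convolutions' (filter correlations) with ReLU, returning the
    activations of every layer."""
    acts = []
    x = wave
    for layer_filters in weights:
        # Correlate the signal with each filter, then subsample by 2.
        x = np.stack([np.correlate(x, w, mode="valid")[::2]
                      for w in layer_filters])
        x = relu(x).mean(axis=0)  # crude channel pooling, keeps x 1-D
        acts.append(x)
    return acts

def deep_feature_loss(denoised, clean, weights):
    """'Perceptual' loss: mean L1 distance between the layer
    activations of the denoised and clean signals."""
    fa = features(denoised, weights)
    fb = features(clean, weights)
    return sum(np.abs(a - b).mean() for a, b in zip(fa, fb))

# Example: two layers of 4 random filters of length 9 (illustrative only;
# the paper's point is that *trained* features work and random ones do not).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 9)) for _ in range(2)]
clean = rng.standard_normal(256)
noisy = clean + 0.1 * rng.standard_normal(256)
loss = deep_feature_loss(noisy, clean, weights)
```

In training, a denoising transform's output would replace `noisy`, and `loss` would be minimized by gradient descent through a differentiable feature extractor (e.g. in an autodiff framework) rather than this NumPy stand-in.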
Keywords
speech enhancement, denoising, deep neural networks, cochlear model, perceptual metrics