CatNet: music source separation system with mix-audio augmentation

arXiv (2021)

Abstract
Music source separation (MSS) is the task of separating a music piece into individual sources, such as vocals and accompaniment. Recently, neural network based methods have been applied to the MSS problem; they can be categorized into spectrogram based and time-domain based methods. However, little research has exploited the complementary information of spectrogram and time-domain inputs for MSS. In this article, we propose a CatNet framework that concatenates a UNet separation branch taking spectrograms as input with a WavUNet separation branch taking time-domain waveforms as input. The system is end-to-end and fully differentiable, with the spectrogram calculation incorporated into CatNet. In addition, we propose a novel mix-audio data augmentation method that randomly mixes audio segments from the same source to create augmented segments for training. Our proposed CatNet MSS system achieves a state-of-the-art vocals separation signal-to-distortion ratio (SDR) of 7.54 dB, outperforming MMDenseNet (6.57 dB) on the MUSDB18 dataset.
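The abstract describes the two-branch idea only at a high level. Below is a minimal, hypothetical PyTorch sketch of how a spectrogram branch (a UNet on the magnitude STFT, computed inside the network so the system stays differentiable) and a time-domain branch (a WavUNet on the raw waveform) could be concatenated and fused; the module names `spec_unet`, `wav_unet`, the 1x1 fusion convolution, the mask-and-phase-reuse step, and the STFT parameters are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CatNetSketch(nn.Module):
    """Sketch of the two-branch concatenation idea, assuming user-supplied
    UNet/WavUNet modules. Not the authors' implementation."""

    def __init__(self, spec_unet: nn.Module, wav_unet: nn.Module,
                 n_fft: int = 2048, hop: int = 512):
        super().__init__()
        self.spec_unet = spec_unet        # maps a magnitude spectrogram to a mask
        self.wav_unet = wav_unet          # maps a waveform to a waveform estimate
        self.n_fft, self.hop = n_fft, hop
        self.fuse = nn.Conv1d(2, 1, kernel_size=1)  # merge the two branch outputs

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # Spectrogram branch: STFT computed inside the model, so the whole
        # system is end-to-end differentiable.
        window = torch.hann_window(self.n_fft, device=wav.device)
        stft = torch.stft(wav, self.n_fft, self.hop, window=window,
                          return_complex=True)
        mag, phase = stft.abs(), torch.angle(stft)
        mask = self.spec_unet(mag)                    # assumed magnitude mask
        est_spec = torch.polar(mask * mag, phase)     # reuse the mixture phase
        wav_from_spec = torch.istft(est_spec, self.n_fft, self.hop,
                                    window=window, length=wav.shape[-1])

        # Time-domain branch operates directly on the waveform.
        wav_from_time = self.wav_unet(wav)

        # Concatenate both estimates along a channel axis and fuse them.
        stacked = torch.stack([wav_from_spec, wav_from_time], dim=1)
        return self.fuse(stacked).squeeze(1)
```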
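The mix-audio augmentation can likewise be illustrated with a short NumPy sketch: segments of the same source (e.g. two vocal segments from different songs) are summed to form an augmented training segment, and the corresponding mixture is rebuilt from the augmented sources. Function names and the two-source (vocals/accompaniment) setup are illustrative assumptions.

```python
import numpy as np

def mix_audio_augment(source_segments: np.ndarray, num_mix: int = 2,
                      rng: np.random.Generator | None = None) -> np.ndarray:
    """Sum `num_mix` randomly chosen segments of the SAME source.
    `source_segments` has shape (num_segments, num_samples)."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(source_segments), size=num_mix, replace=False)
    return source_segments[idx].sum(axis=0)

def make_training_pair(vocal_segments: np.ndarray,
                       accomp_segments: np.ndarray,
                       rng: np.random.Generator | None = None):
    """Hypothetical usage: build one (mixture, vocals target) pair from
    per-source augmented segments."""
    vocals = mix_audio_augment(vocal_segments, rng=rng)
    accomp = mix_audio_augment(accomp_segments, rng=rng)
    mixture = vocals + accomp    # input to the separator
    return mixture, vocals       # target for the vocals estimate
```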
Keywords
music source separation system, mix-audio