WaveBYOL: Self-Supervised Learning for Audio Representation From Raw Waveforms.

Sunghyun Kim, Yong-Hoon Choi

IEEE Access (2023)

Abstract
In this paper, we propose the WaveBYOL model, which learns general-purpose audio representations directly from raw waveforms based on the bootstrap your own latent (BYOL) approach, a Siamese neural network architecture. WaveBYOL does not rely on handcrafted feature extraction; the model learns general-purpose audio representations from raw waveforms by itself and can therefore be easily applied to various downstream tasks. The augmentation layer in the WaveBYOL model is designed to create various views from the time domain of the raw audio waveforms, and the encoding layer is designed to learn representations by extracting features from these views, i.e., the augmented audio waveforms. We assess the representations learned by WaveBYOL by conducting experiments on seven audio downstream tasks under both frozen-model evaluation and fine-tuning settings. Accuracy, precision, recall, and F1-score are used as performance evaluation metrics for the proposed model, and its accuracy is compared with those of existing models. In most downstream tasks, WaveBYOL achieves performance competitive with recently developed state-of-the-art models such as contrastive learning for audio (COLA), BYOL for audio (BYOL-A), self-supervised audio spectrogram transformer (SSAST), audio representation learning with teacher-student transformer (ATST), and DeLoRes. Our implementation and pretrained models are available on GitHub.
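For readers unfamiliar with the BYOL training scheme summarized above, the following is a minimal PyTorch sketch of one BYOL-style training step on raw waveforms: two time-domain augmented views feed an online branch (encoder, projector, predictor) and a momentum-updated target branch, and the online branch is trained to predict the target projection. The encoder layers, augmentation, dimensions, and hyperparameters here are illustrative assumptions, not the authors' actual WaveBYOL implementation (which is released on GitHub).

```python
# Hedged sketch of a BYOL-style step on raw waveforms (illustrative, not the official WaveBYOL code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class WaveEncoder(nn.Module):
    """1-D convolutional encoder mapping a raw waveform to an embedding (sizes are assumptions)."""
    def __init__(self, embed_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(256, embed_dim)

    def forward(self, x):             # x: (batch, 1, samples)
        h = self.conv(x).squeeze(-1)  # (batch, 256)
        return self.fc(h)             # (batch, embed_dim)

def mlp(in_dim, hidden=1024, out_dim=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden),
                         nn.ReLU(), nn.Linear(hidden, out_dim))

def augment(wave):
    """Toy time-domain augmentation: random gain plus additive noise (stand-in for the augmentation layer)."""
    gain = torch.empty(wave.size(0), 1, 1).uniform_(0.5, 1.5)
    return wave * gain + 0.01 * torch.randn_like(wave)

def byol_loss(p, z):
    """Negative cosine similarity between online prediction p and target projection z."""
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()

# Online network (encoder + projector + predictor) and a momentum-updated target copy.
encoder, projector = WaveEncoder(), mlp(512)
predictor = mlp(256, out_dim=256)
target_encoder, target_projector = copy.deepcopy(encoder), copy.deepcopy(projector)
for p in list(target_encoder.parameters()) + list(target_projector.parameters()):
    p.requires_grad = False

opt = torch.optim.Adam(list(encoder.parameters()) + list(projector.parameters())
                       + list(predictor.parameters()), lr=3e-4)

wave = torch.randn(8, 1, 16000)        # a dummy batch of 1-second clips at 16 kHz
v1, v2 = augment(wave), augment(wave)  # two augmented views of each clip

p1 = predictor(projector(encoder(v1)))  # online branch on view 1
p2 = predictor(projector(encoder(v2)))  # online branch on view 2
with torch.no_grad():                   # target branch gets no gradients
    z1 = target_projector(target_encoder(v1))
    z2 = target_projector(target_encoder(v2))

loss = byol_loss(p1, z2) + byol_loss(p2, z1)  # symmetric BYOL objective
opt.zero_grad()
loss.backward()
opt.step()

# Target network follows the online network via an exponential moving average.
tau = 0.99
with torch.no_grad():
    for o, t in zip(encoder.parameters(), target_encoder.parameters()):
        t.mul_(tau).add_((1 - tau) * o)
    for o, t in zip(projector.parameters(), target_projector.parameters()):
        t.mul_(tau).add_((1 - tau) * o)
```

For downstream evaluation as described in the abstract, the trained encoder would either be frozen and used as a feature extractor for a lightweight classifier, or fine-tuned end to end on each task.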
Keywords
Self-supervised learning (SSL), audio waveform augmentation, audio representation