PSLA: Improving Audio Tagging With Pretraining, Sampling, Labeling, and Aggregation

IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING (2021)

Abstract
Audio tagging is an active research area with a wide range of applications. Since the release of AudioSet, great progress has been made in advancing model performance, mostly through the development of novel model architectures and attention modules. However, we find that appropriate training techniques are equally important for building audio tagging models with AudioSet, yet they have not received the attention they deserve. To fill this gap, we present PSLA, a collection of model-agnostic training techniques that can noticeably boost model accuracy, including ImageNet pretraining, balanced sampling, data augmentation, label enhancement, and model aggregation. While many of these techniques have been previously explored, we conduct a thorough investigation of their design choices and combine them. By training an EfficientNet with pretraining, balanced sampling, data augmentation, and model aggregation, we obtain a single model (with 13.6 M parameters) and an ensemble model that achieve mean average precision (mAP) scores of 0.444 and 0.474 on AudioSet, respectively, outperforming the previous best system, which scored 0.439 with 81 M parameters. In addition, our model achieves a new state-of-the-art mAP of 0.567 on FSD50K. We also investigate the impact of label enhancement on model performance.
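AudioSet is heavily imbalanced, so naive uniform sampling lets common classes (e.g. speech) dominate each epoch. A minimal sketch of one common form of balanced sampling, inverse-frequency example weighting, is shown below. This is an illustrative single-label simplification, not the paper's exact scheme (AudioSet is multi-label, and the class names here are hypothetical):

```python
from collections import Counter

def balanced_sample_weights(labels):
    """Assign each example a weight inversely proportional to its class
    frequency, so that under weighted random sampling every class is
    drawn with roughly equal probability."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

# Toy imbalanced dataset: 8 "speech" clips vs. 2 "siren" clips.
labels = ["speech"] * 8 + ["siren"] * 2
weights = balanced_sample_weights(labels)

# Expected per-class draw probability under these weights.
total = sum(weights)
class_prob = Counter()
for y, w in zip(labels, weights):
    class_prob[y] += w / total
```

In a training pipeline these weights would typically feed a weighted sampler (e.g. PyTorch's `torch.utils.data.WeightedRandomSampler`), so each minibatch sees rare audio events about as often as common ones.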
Keywords
Training, Tagging, Data models, Computational modeling, Speech processing, Pipelines, Tensors, Audio tagging, audio event classification, transfer learning, imbalanced learning, noisy label, ensemble