Learning from Easy to Hard: Multi-Task Learning with Data Scheduling

ICASSP 2024 - IEEE International Conference on Acoustics, Speech and Signal Processing (2024)

Abstract
Multi-task learning (MTL) aims to enhance the performance of all tasks by sharing learned representations. However, shared representations can degrade performance due to task conflicts. Existing MTL methods focus mainly on the relationships between tasks, ignoring that individual data samples can contribute differently to each task. Inspired by curriculum learning, we account for these varying effects and propose Sample-Level Data Scheduling (SLDS), a novel MTL method that gradually feeds the model data ranging from easy to hard. Samples that cause fewer task conflicts and have smaller loss values are considered easy and are given more weight, so the model is trained on easy data first and progressively exposed to hard data. We compare SLDS with several state-of-the-art MTL methods, and the experimental results demonstrate its effectiveness.
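
A minimal sketch of the easy-to-hard sample weighting the abstract describes, not the authors' released implementation: it assumes per-sample task losses are available as a PyTorch tensor, and the function name sample_weights, the temperature parameter, and the linear decay schedule are illustrative assumptions. Samples with smaller total loss are treated as "easy" and up-weighted early, with the weighting flattening toward uniform as training progresses.

    import torch

    def sample_weights(task_losses, epoch, total_epochs, temperature=1.0):
        # Hypothetical SLDS-style easy-to-hard weighting (sketch, not the
        # paper's exact algorithm): low-loss ("easy") samples get higher
        # weight early in training; weights tend toward uniform later.
        # task_losses: (num_tasks, batch_size) detached per-sample losses.
        difficulty = task_losses.sum(dim=0)          # larger = harder sample
        progress = epoch / max(total_epochs - 1, 1)  # 0 -> 1 over training
        sharpness = (1.0 - progress) / temperature   # decays to 0 (uniform)
        weights = torch.softmax(-sharpness * difficulty, dim=0)
        return weights * difficulty.numel()          # keep mean weight near 1

In training, per-sample losses would then be scaled by these weights before reduction, e.g. (weights * per_sample_loss).mean(). Measuring task conflict via per-sample gradient agreement, which the abstract also lists as a difficulty signal, is omitted here for brevity.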
Keywords
multi-task learning,data scheduling,task conflicts