A Min-Max Optimization Framework for Multi-task Deep Neural Network Compression

Jiacheng Guo, Huiming Sun, Minghai Qin, Hongkai Yu, Tianyun Zhang

IEEE International Symposium on Circuits and Systems (2024)

Abstract
Multi-task learning is a subfield of machine learning in which a single shared model is trained to solve several tasks simultaneously. Instead of training a separate model for each task, multi-task learning requires only one model with shared parameters. This greatly reduces the number of parameters in the model, and thus its computational and storage requirements. When multi-task learning is applied to deep neural networks (DNNs), the model still needs to be compressed further, since the size of even a single DNN remains a critical challenge for many computing systems, especially edge platforms. However, when model compression is applied to multi-task learning, it is difficult to maintain the performance of all the tasks. To address this challenge, we propose a min-max optimization framework for training highly compressed multi-task DNN models. The framework automatically adjusts learnable weighting factors for the different tasks, guaranteeing that the task with the worst performance across all tasks is the one being optimized.
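
The core mechanism described in the abstract is the learnable per-task weighting in a min-max objective. Below is a minimal sketch of how one such alternating update could look; the interface details (a PyTorch model whose forward pass returns one output per task head, cross-entropy as every task's loss, the helper names, and the learning rates) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def project_to_simplex(v: torch.Tensor) -> torch.Tensor:
    """Euclidean projection of a 1-D tensor onto the probability simplex."""
    u, _ = torch.sort(v, descending=True)
    cumulative = torch.cumsum(u, dim=0)
    k = torch.arange(1, v.numel() + 1, dtype=v.dtype, device=v.device)
    # Largest index where the sorted entries still exceed the running threshold.
    rho = int((u - (cumulative - 1) / k > 0).nonzero().max())
    tau = (cumulative[rho] - 1) / (rho + 1)
    return torch.clamp(v - tau, min=0)


def minmax_step(model, task_batches, alphas, lr_w=1e-3, lr_a=1e-2):
    """One alternating min-max update: descend on the shared model weights,
    ascend on the per-task weighting factors.

    task_batches: list of (inputs, targets) pairs, one per task.
    alphas: weight vector over tasks, kept on the probability simplex.
    """
    # Per-task losses under the shared (possibly pruned) model; model(x) is
    # assumed to return a list of per-task-head outputs.
    losses = torch.stack([
        F.cross_entropy(model(x)[i], y)
        for i, (x, y) in enumerate(task_batches)
    ])

    # Min step: gradient descent on the model parameters with respect to the
    # alpha-weighted total loss (alphas held fixed).
    model.zero_grad()
    torch.dot(alphas, losses).backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr_w * p.grad

    # Max step: gradient ascent on alphas, then projection back onto the
    # simplex, which shifts weight toward the currently worst-performing task.
    with torch.no_grad():
        alphas = project_to_simplex(alphas + lr_a * losses.detach())
    return alphas
```

Keeping the weighting factors on the probability simplex means the weighted objective upper-bounds the average task loss while the ascent step concentrates mass on the worst task, which is one plausible reading of the worst-case guarantee described in the abstract.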
Keywords
multi-task learning, deep learning, weight pruning, model compression