Differentially Private Federated Learning for Multitask Objective Recognition

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS (2024)

Abstract
Many machine learning models are naturally multitask, involving both regression and classification tasks; training them with a multitask network yields a more generalized model by exploiting correlated features. When such models are deployed on Internet-of-Things devices, computational efficiency and data privacy pose significant challenges to designing a federated learning (FL) algorithm that achieves both high learning performance and strong privacy protection. In this article, a new FL framework is proposed for a class of multitask learning problems with a hard parameter-sharing model, in which the learning tasks are reformulated as a multiobjective optimization problem for better performance. Specifically, the stochastic multiple gradient descent approach and differential privacy are integrated into the FL algorithm to reach a Pareto-optimal solution that achieves a good tradeoff among the learning tasks while protecting the data. The strong performance of this algorithm is demonstrated by empirical experiments on the MultiMNIST, Chinese City Parking, and Cityscapes datasets.
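The core mechanism the abstract describes, combining task gradients via multiple gradient descent toward a Pareto-stationary direction and privatizing the result with Gaussian noise, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the two-task closed-form MGDA weighting, the clipping threshold `clip_norm`, and the noise scale `sigma` are assumptions chosen for clarity.

```python
import numpy as np

def min_norm_coeff(g1, g2):
    """Closed-form MGDA weight for two tasks: the alpha in [0, 1]
    that minimizes ||alpha * g1 + (1 - alpha) * g2||^2."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:  # identical gradients: any convex weight works
        return 0.5
    alpha = (g2 - g1) @ g2 / denom
    return float(np.clip(alpha, 0.0, 1.0))

def dp_common_descent_direction(g1, g2, clip_norm=1.0, sigma=0.1, rng=None):
    """Clip each task gradient to clip_norm, combine them with the
    min-norm weight, and add Gaussian noise (Gaussian mechanism).
    clip_norm and sigma are illustrative DP parameters."""
    rng = np.random.default_rng() if rng is None else rng

    def clip(g):
        n = np.linalg.norm(g)
        return g if n <= clip_norm else g * (clip_norm / n)

    g1, g2 = clip(g1), clip(g2)
    alpha = min_norm_coeff(g1, g2)
    d = alpha * g1 + (1.0 - alpha) * g2
    return d + rng.normal(0.0, sigma * clip_norm, size=d.shape)
```

For orthogonal unit gradients g1 = (1, 0) and g2 = (0, 1), the min-norm weight is alpha = 0.5, so the common direction before noise is (0.5, 0.5), a direction that decreases both objectives, which is the Pareto-tradeoff behavior the abstract refers to.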
Keywords
Task analysis, Computational modeling, Optimization, Federated learning, Deep learning, Training, Servers, Differential privacy (DP), federated learning (FL), multiobjective optimization (MOO), objective recognition