
Differentially Private Federated Learning for Multitask Objective Recognition

IEEE Transactions on Industrial Informatics (2024)

Abstract
Many machine learning models are naturally multitask, involving both regression and classification tasks; training them in a single multitask network can yield a more generalized model by exploiting correlated features. When such models are deployed on Internet-of-Things devices, computational efficiency and data privacy pose significant challenges to designing a federated learning (FL) algorithm that delivers both high learning performance and strong privacy protection. In this article, a new FL framework is proposed for a class of multitask learning problems with a hard parameter-sharing model, through which the learning tasks are reformulated as a multiobjective optimization problem for better performance. Specifically, the stochastic multiple gradient descent approach and differential privacy are integrated into this FL algorithm to reach a Pareto-optimal solution that achieves a good tradeoff among the different learning tasks while protecting the data. The strong performance of the algorithm is demonstrated by empirical experiments on the MultiMNIST, Chinese City Parking, and Cityscapes datasets.
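
The abstract's core mechanism, combining differentially private client updates with a multiple-gradient-descent step toward a Pareto-optimal tradeoff between tasks, can be sketched for the two-task case. The snippet below is a minimal illustration with hypothetical function names and parameter values (clip_norm, noise_std, lr); it is not the paper's implementation and omits the full privacy accounting and the stochastic MGDA details.

import numpy as np

def dp_clip_and_noise(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    # Clip a gradient to clip_norm and add Gaussian noise (Gaussian-mechanism DP).
    # noise_std is relative to clip_norm; values here are purely illustrative.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=grad.shape)

def mgda_two_task_weight(g1, g2):
    # Closed-form MGDA weight for two tasks: pick alpha in [0, 1] minimizing
    # || alpha * g1 + (1 - alpha) * g2 ||^2, giving a common descent direction.
    diff = g1 - g2
    denom = np.dot(diff, diff)
    if denom < 1e-12:  # gradients (nearly) identical
        return 0.5
    alpha = np.dot(g2 - g1, g2) / denom
    return float(np.clip(alpha, 0.0, 1.0))

def server_round(client_task_grads, clip_norm=1.0, noise_std=0.1, lr=0.1):
    # One illustrative FL round: each client sends DP-protected per-task gradients
    # for the shared parameters; the server averages them per task, then combines
    # the two task gradients with the MGDA weight to form a Pareto-descent update.
    rng = np.random.default_rng(0)
    noisy = [[dp_clip_and_noise(g, clip_norm, noise_std, rng) for g in grads]
             for grads in client_task_grads]             # per client, per task
    g_task1 = np.mean([c[0] for c in noisy], axis=0)     # average over clients
    g_task2 = np.mean([c[1] for c in noisy], axis=0)
    alpha = mgda_two_task_weight(g_task1, g_task2)
    direction = alpha * g_task1 + (1.0 - alpha) * g_task2
    return -lr * direction                               # update for shared weights

# Toy usage: 3 clients, each reporting gradients for two tasks on 5 shared weights.
clients = [[np.random.randn(5), np.random.randn(5)] for _ in range(3)]
print(server_round(clients))

The two-task closed form avoids the general quadratic program that MGDA solves for more than two objectives; the clipping bound and noise scale would in practice be chosen to meet a target differential-privacy budget.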
Key words
Task analysis, Computational modeling, Optimization, Federated learning, Deep learning, Training, Servers, Differential privacy (DP), federated learning (FL), multiobjective optimization (MOO), objective recognition