Personalized Wireless Federated Learning for Large Language Models
arXiv (2024)
Abstract
Large Language Models (LLMs) have revolutionized natural language processing
tasks. However, their deployment in wireless networks still faces challenges,
i.e., a lack of privacy and security protection mechanisms. Federated Learning
(FL) has emerged as a promising approach to address these challenges. Yet, it
suffers from issues including inefficient handling of large and heterogeneous
data, resource-intensive training, and high communication overhead. To tackle
these issues, we first compare the different learning stages of LLMs in
wireless networks and their features. Next, we introduce two personalized
wireless federated fine-tuning methods with low communication overhead: (1)
Personalized Federated Instruction Tuning (PFIT), which employs reinforcement
learning to fine-tune local LLMs with diverse reward models to achieve
personalization; and (2) Personalized Federated Task Tuning (PFTT), which
leverages global adapters and local Low-Rank Adaptations (LoRA) to
collaboratively fine-tune local LLMs, where the local LoRAs can be applied to
achieve personalization without aggregation. Finally, we perform simulations
to demonstrate the effectiveness of the two proposed methods and
comprehensively discuss open issues.
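
To make the selective-aggregation idea behind PFTT concrete, below is a minimal sketch of one federated round in which shared adapter weights are averaged across clients (FedAvg-style) while each client's LoRA weights stay local for personalization. All names here (Client, federated_round, the adapter/lora dictionaries) are hypothetical illustrations, not the authors' implementation.

```python
# Sketch, under assumptions: a generic FedAvg-style round where only the
# "global adapter" parameters are aggregated and each client's LoRA
# parameters never leave the device, matching the PFTT description above.
from typing import Dict, List
import torch

class Client:
    def __init__(self, adapter: Dict[str, torch.Tensor],
                 lora: Dict[str, torch.Tensor]):
        self.adapter = adapter  # shared adapter weights, uploaded each round
        self.lora = lora        # personalized low-rank weights, kept local

    def local_finetune(self) -> None:
        # Placeholder for local fine-tuning on private data; in practice
        # this would update both self.adapter and self.lora.
        pass

def federated_round(clients: List[Client]) -> None:
    """One communication round: average adapters only; LoRAs stay local."""
    for c in clients:
        c.local_finetune()
    # Server-side aggregation over adapter parameters only.
    keys = clients[0].adapter.keys()
    global_adapter = {
        k: torch.stack([c.adapter[k] for c in clients]).mean(dim=0)
        for k in keys
    }
    # Broadcast the aggregated adapter; local LoRAs are untouched, which
    # limits communication to the adapter and preserves personalization.
    for c in clients:
        c.adapter = {k: v.clone() for k, v in global_adapter.items()}
```

Because only the (small) adapter tensors cross the network, this kind of scheme keeps per-round communication low relative to exchanging full model weights.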