FedAnchor: Enhancing Federated Semi-Supervised Learning with Label Contrastive Loss for Unlabeled Clients
CoRR (2024)
Abstract
Federated learning (FL) is a distributed learning paradigm that facilitates
collaborative training of a shared global model across devices while keeping
data localized. The deployment of FL in numerous real-world applications faces
delays, primarily due to the prevalent reliance on supervised tasks. Generating
detailed labels at edge devices, if feasible, is demanding, given resource
constraints and the imperative for continuous data updates. In addressing these
challenges, solutions such as federated semi-supervised learning (FSSL), which
relies on unlabeled clients' data and a limited amount of labeled data on the
server, become pivotal. In this paper, we propose FedAnchor, an innovative FSSL
method that introduces a unique double-head structure, called anchor head,
paired with the classification head trained exclusively on labeled anchor data
on the server. The anchor head is empowered with a newly designed label
contrastive loss based on the cosine similarity metric. Our approach mitigates
the confirmation bias and overfitting issues associated with pseudo-labeling
techniques based on high-confidence model prediction samples. Extensive
experiments on CIFAR10/100 and SVHN datasets demonstrate that our method
outperforms the state-of-the-art method by a significant margin in terms of
convergence rate and model accuracy.
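The abstract describes an anchor head trained with a label contrastive loss built on cosine similarity, which pulls together embeddings sharing a label and pushes apart those that do not. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of a generic supervised (label-based) contrastive loss of that kind; the function name, temperature parameter, and SupCon-style normalization are illustrative assumptions, not FedAnchor's definition.

```python
import numpy as np

def label_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative sketch (not the paper's exact loss): a supervised
    contrastive loss on cosine similarity. Embeddings with the same
    label act as positives; all other samples act as negatives."""
    # L2-normalize rows so dot products equal cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature              # scaled pairwise cosine similarity
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    exp_sim = np.exp(sim) * ~self_mask       # exclude self-similarity terms
    # positive pairs: same label, excluding the anchor itself
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # log-probability of each candidate pair under a softmax over non-self pairs
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # average negative log-probability over each anchor's positive pairs
    loss_per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return loss_per_anchor.mean()
```

In an FSSL setting like the one described, such a head would be trained on the server's labeled anchor data, and unlabeled clients could be pseudo-labeled by similarity to anchor embeddings rather than by raw classifier confidence, which is the confirmation-bias issue the abstract highlights.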