More Communication-efficient Distributed Sparse Learning

Xingcai Zhou, Guang Yang

Information Sciences (2024)

Abstract
In a modern distributed learning framework, intra-worker computation may be up to 1,000 times faster than inter-worker communication. It is therefore advisable to perform the expensive calculations on the worker machines and to communicate in as few rounds as possible. In this paper, we propose three novel distributed sparse learning algorithms for high dimensions (EDSL-ET, EDSL-IBCD, and EDSL-IBCD-ET) that are efficient in both computation and communication. EDSL-ET uses the Topk sparsification technique to reduce communication costs and adopts error feedback to guarantee convergence. EDSL-IBCD greatly reduces communication costs by introducing an independent block coordinate descent method. EDSL-IBCD-ET combines the advantages of EDSL-ET and EDSL-IBCD for extra-high-dimensional feature learning and is the best of the three in communication, computation, and stability. We give theoretical guarantees for all three algorithms under mild conditions, matching the performance of the centralized algorithm. Extensive experiments on simulated and real data validate our theoretical analysis and demonstrate that the proposed algorithms perform well with just a few rounds of communication.
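
The abstract describes combining Topk gradient sparsification with error feedback (the EDSL-ET ingredient). Below is a minimal sketch of that mechanism, not the paper's actual algorithm: the function names, the sparsity level `k`, the plain averaged-SGD server update, and the synthetic gradients are all illustrative assumptions.

```python
import numpy as np

def topk_sparsify(vec, k):
    """Keep the k largest-magnitude entries of vec; zero out the rest."""
    out = np.zeros_like(vec)
    if k > 0:
        idx = np.argpartition(np.abs(vec), -k)[-k:]
        out[idx] = vec[idx]
    return out

def worker_step(grad, error, k):
    """Compress the local gradient with Topk and keep the residual locally
    (error feedback), so dropped coordinates are re-sent in later rounds."""
    corrected = grad + error           # add back previously dropped mass
    sparse = topk_sparsify(corrected, k)
    new_error = corrected - sparse     # residual retained on the worker
    return sparse, new_error

# Toy usage with hypothetical sizes: 4 workers, 1000 features, top 1% kept.
rng = np.random.default_rng(0)
d, workers, k, lr = 1000, 4, 10, 0.1
w = np.zeros(d)
errors = [np.zeros(d) for _ in range(workers)]
for rnd in range(5):                   # a few communication rounds
    msgs = []
    for m in range(workers):
        grad = rng.normal(size=d)      # stand-in for a local stochastic gradient
        sparse, errors[m] = worker_step(grad, errors[m], k)
        msgs.append(sparse)            # only k nonzeros travel to the server
    w -= lr * np.mean(msgs, axis=0)    # server averages and updates the model
```

In this sketch each worker sends only k nonzero coordinates per round, which is the communication saving the abstract refers to, while the locally accumulated residual is what the error-feedback argument uses to preserve convergence.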
Keywords
Distributed learning,Communication efficient,Topk,Error feedback,Gradient sparse,Independent coordinate block