Communication Efficient Distributed Training with Distributed Lion
CoRR (2024)
Abstract
The Lion optimizer has been a promising competitor to AdamW for training large AI models, with advantages in memory, computation, and sample efficiency. In this paper, we introduce Distributed Lion, an adaptation of Lion to distributed training environments. By leveraging the sign operator in Lion, Distributed Lion only requires communicating binary or lower-precision vectors between the workers and the central server, significantly reducing the communication cost. Our theoretical analysis confirms Distributed Lion's convergence properties. Empirical results demonstrate its robustness across a range of tasks, worker counts, and batch sizes, on both vision and language problems. Notably, Distributed Lion attains performance comparable to standard Lion or AdamW applied to the aggregated gradients, but with significantly reduced communication bandwidth. This feature is particularly advantageous for training large models. In addition, we demonstrate that Distributed Lion strikes a more favorable performance-bandwidth trade-off than existing communication-efficient distributed methods such as deep gradient compression and ternary gradients.
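The key idea described in the abstract — that Lion's sign operator makes each worker's update a binary vector — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes standard Lion local updates and a simple majority-vote aggregation of sign vectors on the server, which is one natural way to keep the server-to-worker traffic binary as well. All function names and hyperparameter values here are illustrative.

```python
import numpy as np

def lion_local_step(grad, momentum, beta1=0.9, beta2=0.99):
    """One local Lion step on a worker.

    Returns the binary (sign) update to communicate to the server,
    plus the updated momentum, which stays on the worker.
    """
    update = np.sign(beta1 * momentum + (1 - beta1) * grad)  # entries in {-1, 0, +1}
    momentum = beta2 * momentum + (1 - beta2) * grad
    return update, momentum

def server_aggregate(sign_updates):
    """Majority vote over the workers' sign vectors.

    The result is again a sign vector, so only binary/ternary data
    ever crosses the network in either direction (an assumption of
    this sketch; averaging the signs is another option).
    """
    return np.sign(np.sum(sign_updates, axis=0))

# Toy run: 3 workers optimizing a 4-dimensional parameter vector.
rng = np.random.default_rng(0)
params = np.zeros(4)
momenta = [np.zeros(4) for _ in range(3)]
grads = [rng.normal(size=4) for _ in range(3)]  # stand-in per-worker gradients

updates = []
for i in range(3):
    u, momenta[i] = lion_local_step(grads[i], momenta[i])
    updates.append(u)

lr, weight_decay = 1e-4, 0.01
agg = server_aggregate(updates)
params -= lr * (agg + weight_decay * params)  # decoupled weight decay, as in Lion
```

The bandwidth saving is visible in the payloads: each worker transmits one bit per parameter (the sign), rather than a 16- or 32-bit float, which is the roughly 16-32x reduction the abstract refers to.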