DiPaCo: Distributed Path Composition
arXiv (2024)
Abstract
Progress in machine learning (ML) has been fueled by scaling neural network
models. This scaling has been enabled by ever more heroic feats of engineering,
necessary for accommodating ML approaches that require high bandwidth
communication between devices working in parallel. In this work, we propose a
co-designed modular architecture and training approach for ML models, dubbed
DIstributed PAth COmposition (DiPaCo). During training, DiPaCo distributes
computation by paths through a set of shared modules. Together with a
Local-SGD-inspired optimization (DiLoCo) that keeps modules in sync with
drastically reduced communication, our approach facilitates training across poorly
connected and heterogeneous workers, with a design that ensures robustness to
worker failures and preemptions. At inference time, only a single path needs to
be executed for each input, without the need for any model compression. We
consider this approach as a first prototype towards a new paradigm of
large-scale learning, one that is less synchronous and more modular. Our
experiments on the widely used C4 benchmark show that, for the same number of
training steps but less wall-clock time, DiPaCo exceeds the performance of a 1
billion-parameter dense transformer language model by choosing one of 256
possible paths, each with a size of 150 million parameters.
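
To make the two mechanisms described above concrete, below is a minimal, self-contained sketch (not the paper's implementation): each worker trains only one path, i.e. a composition of modules drawn from a shared pool, and the shared modules are re-synchronized with a DiLoCo-style outer step only after many communication-free local steps. The toy objective, module names, and hyperparameters are all illustrative assumptions; DiLoCo additionally applies Nesterov momentum in the outer step, which is omitted here for brevity.

```python
# Illustrative toy of DiPaCo-style training: workers own paths through a
# shared module pool; synchronization is a rare, DiLoCo-style outer step.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Shared pool of modules; a path composes a subset of them.
modules = {name: rng.normal(scale=0.1, size=dim) for name in "ABCD"}
paths = [["A", "B"], ["A", "C"], ["D", "B"], ["D", "C"]]  # one path per worker

# Toy per-worker objective: pull the sum of a path's module weights toward
# a worker-specific target vector (a stand-in for the worker's data shard).
targets = rng.normal(size=(len(paths), dim))

inner_lr, outer_lr, local_steps, rounds = 0.1, 0.7, 50, 20

for _ in range(rounds):
    drift = {name: [] for name in modules}  # per-module "outer gradients"
    for k, path in enumerate(paths):
        # Each worker starts from a synced copy of only its path's modules.
        local = {n: modules[n].copy() for n in path}
        for _ in range(local_steps):  # many cheap, communication-free steps
            residual = sum(local[n] for n in path) - targets[k]
            for n in path:
                local[n] -= inner_lr * residual
        for n in path:
            drift[n].append(modules[n] - local[n])  # parameter drift
    # One sparse sync per round: each module is updated only by the workers
    # whose paths actually used it (DiLoCo would add outer Nesterov momentum).
    for n, ds in drift.items():
        modules[n] -= outer_lr * np.mean(ds, axis=0)

# Inference executes a single path; no other modules are loaded or compressed.
errors = [np.linalg.norm(sum(modules[n] for n in p) - targets[k])
          for k, p in enumerate(paths)]
print("mean per-worker path error:", float(np.mean(errors)))
```

Note how each module is synchronized only with the workers whose paths include it, and how inference touches just the modules on a single path, mirroring the abstract's claim that no model compression is needed at serving time.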