Boosting Large-scale Parallel Training Efficiency with C4: A Communication-Driven Approach
arXiv (2024)

Abstract
The emergence of Large Language Models (LLMs) has necessitated the adoption of parallel training techniques, involving the deployment of thousands of GPUs to train a single model. Unfortunately, we have found that the efficiency of current parallel training is often suboptimal, largely due to two main issues. First, hardware failures are inevitable and lead to interruptions in the training tasks; the inability to quickly identify the faulty components results in a substantial waste of GPU resources. Second, since GPUs must wait for parameter synchronization to complete before proceeding to the next round of computation, network congestion can greatly increase the waiting time for GPUs. To address these challenges, this paper introduces a communication-driven solution, namely C4. The key insights of C4 are two-fold. First, in parallel training, collective communication exhibits periodic and homogeneous characteristics, so any anomalies are certainly due to some form of hardware malfunction. By leveraging this feature, C4 can rapidly identify the faulty components, swiftly isolate the anomaly, and restart the task, thereby avoiding the resource waste caused by delays in anomaly detection. Second, the predictable communication pattern of collective communication, involving a few large flows, allows C4 to perform efficient traffic planning, substantially reducing network congestion. C4 has been deployed extensively across our production systems, cutting error-induced overhead by roughly 30% and improving runtime performance for certain applications with moderate communication costs.
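To make the first insight concrete, the sketch below illustrates one simple way such anomaly detection could work: because every rank runs the same collectives each iteration, per-rank completion times should be nearly identical, so a persistent outlier points to a faulty component. This is only a minimal illustration under assumed conventions, not C4's actual implementation; the names (`CollectiveMonitor`, `record_iteration`) and thresholds are hypothetical.

```python
"""Illustrative sketch (not the paper's code) of straggler detection based on
the periodic, homogeneous nature of collective communication."""
from collections import defaultdict
from statistics import median


class CollectiveMonitor:
    """Tracks per-rank collective durations and flags persistent stragglers."""

    def __init__(self, slowdown_factor: float = 2.0, patience: int = 3):
        self.slowdown_factor = slowdown_factor   # how much slower than the median counts as anomalous
        self.patience = patience                 # consecutive anomalous iterations before flagging
        self.strikes = defaultdict(int)          # rank -> consecutive anomaly count

    def record_iteration(self, durations_ms: dict) -> list:
        """durations_ms maps rank -> time spent in the collective this step.
        Returns the ranks considered suspect after this iteration."""
        typical = median(durations_ms.values())
        suspects = []
        for rank, duration in durations_ms.items():
            if duration > self.slowdown_factor * typical:
                self.strikes[rank] += 1
            else:
                self.strikes[rank] = 0
            if self.strikes[rank] >= self.patience:
                suspects.append(rank)
        return suspects


# Example: rank 5 is consistently ~4x slower than its peers, so after
# `patience` iterations it would be reported for isolation and a task restart.
monitor = CollectiveMonitor()
for _ in range(3):
    step_times = {r: 120.0 for r in range(8)}
    step_times[5] = 480.0
    flagged = monitor.record_iteration(step_times)
print(flagged)  # -> [5]
```

In this toy setup, flagging only persistent outliers (the `patience` parameter) avoids reacting to transient jitter, which mirrors the idea of attributing sustained anomalies, rather than one-off delays, to hardware malfunction.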