TACOS: Topology-Aware Collective Algorithm Synthesizer for Distributed Machine Learning
arXiv (2023)
Abstract
The surge of artificial intelligence, specifically large language models, has
driven the rapid development of large-scale machine learning
training clusters. Collective communications within these clusters tend to be
heavily bandwidth-bound, necessitating techniques to optimally utilize the
available network bandwidth. This puts the routing algorithm for the collective
at the forefront of determining the performance. Unfortunately, communication
libraries used in distributed machine learning today are limited by a fixed set
of routing algorithms. This constrains collective performance within the
domain of next-generation training clusters that employ intricate,
heterogeneous, and asymmetric large-scale topologies. Further, the emergence
of irregular topologies attributed to runtime phenomena such as device failures
serves to compound the complexity of the challenge. To this end, this paper
introduces TACOS, an automated synthesizer that generates topology-aware
collective algorithms for common distributed machine learning collectives
across arbitrary input network topologies. TACOS synthesized an All-Reduce
algorithm for a heterogeneous 512-NPU system in just 6.09 minutes while
achieving a performance improvement of up to 4.27x over state-of-the-art prior
work. TACOS exhibits high scalability, with synthesis time scaling
quadratically with the number of NPUs. In contrast to prior works' NP-hard
approaches, TACOS completes synthesis for 40K NPUs in 2.52 hours.
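To make the routing idea concrete, below is an illustrative sketch (not TACOS's synthesized output) of the classic ring All-Reduce, one of the fixed, topology-agnostic algorithms that communication libraries ship today and that TACOS aims to outperform on irregular topologies. All function and variable names here are hypothetical.

```python
def ring_all_reduce(chunks_per_npu):
    """Simulate a ring All-Reduce over p NPUs.

    chunks_per_npu[i][c] holds NPU i's value for chunk c (p NPUs, p chunks).
    Returns the final state, where every NPU holds the element-wise sum.
    """
    p = len(chunks_per_npu)
    data = [list(row) for row in chunks_per_npu]  # don't mutate the input

    # Phase 1, reduce-scatter: in step s, NPU i forwards chunk (i - s) mod p
    # to its ring neighbor, which accumulates it. After p - 1 steps, NPU i
    # owns the fully reduced chunk (i + 1) mod p.
    for s in range(p - 1):
        for i in range(p):
            c = (i - s) % p
            data[(i + 1) % p][c] += data[i][c]

    # Phase 2, all-gather: circulate the fully reduced chunks around the
    # ring so that every NPU ends up with all p reduced chunks.
    for s in range(p - 1):
        for i in range(p):
            c = (i + 1 - s) % p
            data[(i + 1) % p][c] = data[i][c]

    return data
```

Each NPU sends 2(p - 1) chunks in total, which keeps every link busy and makes the collective bandwidth-bound on a ring; on heterogeneous or asymmetric topologies, however, this fixed schedule cannot exploit faster links, which is the gap topology-aware synthesis targets.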