Scheduling Deep Learning Training in GPU Cluster Using the Model-Similarity-Based Policy

INTELLIGENT INFORMATION AND DATABASE SYSTEMS, ACIIDS 2023, PT II (2023)

Abstract
Training large neural networks on huge amounts of data using multiple Graphics Processing Units (GPUs) has become widespread with the emergence of Deep Learning (DL) technology. Such training is usually carried out in datacenters featuring multiple GPU clusters, which are shared amongst users. However, different GPU architectures co-exist on the market and differ in training performance. To maximise the utilisation of a GPU cluster, the scheduler plays an important role in managing the resources by dispatching jobs to the GPUs. An efficient scheduling strategy should take into account that the training performance of each GPU architecture varies across DL models. In this work, an original model-similarity-based scheduling policy is introduced that takes into account how well each GPU architecture matches a given DL model. The results show that, for distributed training of a DL model with a large batch size across multiple GPUs, the model-similarity-based scheduling policy can reduce the makespan.
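The abstract does not spell out the policy's algorithm, so the following Python sketch is only one plausible reading of a model-similarity-based scheduler: the model feature vectors, the choice of cosine similarity as the similarity measurement, and the profiled throughput numbers are all invented for illustration and are not taken from the paper.

# Illustrative sketch of a model-similarity-based scheduling policy.
# All feature vectors, similarity choices, and throughput figures below
# are hypothetical placeholders, not the paper's published method.
import math

# Hypothetical profiled training throughput (samples/s) of reference DL
# models on each GPU architecture; values are invented for demonstration.
PROFILED_THROUGHPUT = {
    "ResNet-50": {"V100": 410.0, "A100": 780.0, "T4": 130.0},
    "BERT-base": {"V100": 95.0,  "A100": 210.0, "T4": 28.0},
    "VGG-16":    {"V100": 290.0, "A100": 520.0, "T4": 90.0},
}

# Hypothetical model descriptors: (parameters in millions, GFLOPs per
# sample, typical batch size). A real system would use richer features.
MODEL_FEATURES = {
    "ResNet-50": (25.6, 4.1, 256),
    "BERT-base": (110.0, 22.4, 32),
    "VGG-16":    (138.0, 15.5, 128),
}

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def schedule(job_features, free_gpus):
    """Dispatch a job to the free GPU architecture on which the most
    similar profiled reference model trains fastest."""
    # 1. Find the profiled model most similar to the incoming job.
    best_ref = max(
        MODEL_FEATURES,
        key=lambda m: cosine_similarity(job_features, MODEL_FEATURES[m]),
    )
    # 2. Among the currently free architectures, pick the one with the
    #    highest profiled throughput for that reference model
    #    (assumes at least one listed architecture is free).
    candidates = {g: t for g, t in PROFILED_THROUGHPUT[best_ref].items()
                  if g in free_gpus}
    return best_ref, max(candidates, key=candidates.get)

# Example: a job resembling a mid-sized CNN, with V100 and T4 GPUs free.
ref, gpu = schedule((30.0, 5.0, 256), free_gpus={"V100", "T4"})
print(f"most similar profiled model: {ref}; dispatched to: {gpu}")

In this reading, similarity to previously profiled models stands in for direct profiling of every new job, which is what lets the scheduler match a DL model to a well-suited GPU architecture without measuring it on each one first.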
Keywords
Deep Learning, Distributed Training, GPU Cluster, Scheduling, Scheduling Policy, Similarity Measurement