GAS: GPU Allocation Strategy for Deep Learning Training Tasks.

SmartWorld/UIC/ScalCom/DigitalTwin/PriComp/Meta(2022)

Abstract
Nowadays, with the significant increase in deep learning training (DLT) task workloads on GPU clusters, the number and scale of GPU clusters are growing rapidly. A crucial question is how to efficiently schedule DLT tasks with limited cluster resources. Existing GPU schedulers do not fully consider the connection between users and clusters, and few methods optimize the GPU allocation of DLT tasks. In this study, we propose a scheduling framework for GPU clusters that improves performance and reduces cluster energy consumption. We first analyze how the performance and energy-consumption characteristics of DLT tasks relate to their task configurations. Then, we propose a prediction method to estimate the completion time and energy consumption of DLT tasks. To make better use of cluster resources, we build on this prediction model and propose GAS, which adopts a GPU Allocation Strategy that specifies the parallelism for DLT tasks. Compared to FIFO and SJF schedulers, GAS reduces the makespan by 19.6%-19.8%, the average queueing time by 84.4%-93.9%, and energy consumption by 22.2%-22.5%. For users, GAS also reduces cost by 21.3%-21.6%. A large-scale simulation experiment further illustrates the effectiveness and scalability of GAS.
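The abstract describes prediction-driven GPU allocation only at a high level. The sketch below illustrates what such a loop could look like; it is a minimal, hypothetical Python illustration, not the paper's algorithm. The functions predict_time, predict_energy, and choose_parallelism, the alpha weight, and all numeric constants are assumptions standing in for the paper's prediction model and allocation policy.

from dataclasses import dataclass

@dataclass
class DLTTask:
    name: str
    candidate_parallelisms: list[int]  # GPU counts this task can run with

def predict_time(task, gpus):
    # Hypothetical completion-time predictor (toy stand-in for the
    # paper's model): speedup from parallelism plus growing sync overhead.
    return 100.0 / gpus + 5.0 * gpus

def predict_energy(task, gpus):
    # Hypothetical energy predictor: predicted runtime times an assumed
    # per-node power draw that grows with GPU count.
    return predict_time(task, gpus) * (50.0 + 200.0 * gpus)

def choose_parallelism(task, free_gpus, alpha=0.5):
    # Pick the feasible GPU count minimizing a weighted time/energy score;
    # the weighting rule is an illustrative assumption.
    feasible = [g for g in task.candidate_parallelisms if g <= free_gpus]
    if not feasible:
        return None
    return min(feasible,
               key=lambda g: alpha * predict_time(task, g)
                             + (1.0 - alpha) * predict_energy(task, g))

# Usage: walk the queue and assign a parallelism to each task that fits.
free = 8
queue = [DLTTask("resnet50", [1, 2, 4, 8]), DLTTask("bert", [2, 4])]
for task in queue:
    g = choose_parallelism(task, free)
    if g is not None:
        print(f"{task.name}: run with {g} GPU(s)")
        free -= g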
Keywords
Deep Learning Training Task, GPU Cluster, Prediction Model, GPU Allocation, Scheduler